[Binary tar archive; compressed payload is not human-readable and has been omitted. Recoverable archive contents (from the ustar headers, owner `core`):]

var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz  (gzip-compressed kubelet log)
25a 1Y,-,5$kF tlru\gb&Vi3Pfcr5.\9W_2dM9c`G'pX@l+sQ(6BbG "5z;&"GSVO}2ؗWb޿9Cwȹe&v9gmQix`,(pp VQrfsKŵ隕z1|oϋJg!H||6M.S=LN} 1\C2D 0M ۫7T.dFYŋUy56¼{k#UiqIa-UYj:z,z46 PpH_]jOUV~W.:}U}wrm6Z'+OMMgec(|~D Qw%r jɜ$j[P愡ͦ@_@K/#~gPewTŻQў@dS4blLRZ2–0 [|nE[n2Wd \';,8bCѷ<-U7}Ҍrb^X@o1"_k #^)Y'$1 jPf E҇+a*%# 3K":EM58Gb KQ#{=K0Po^Xº "hVUTs:B64@"O(L'`t = pznߗiDv]#]AڦO|ksPUpax1jFT U\^f)Py>.RsxaloWv v{rREikmt-{S VTOaRO}>2vy1]+ϱB*LWtc.g,V&bɏI]s}q42xsZy;s h|$y t-OQpM,^:9P15.‹>Tyw)0Ju1z2urOW'+d)$Xväc]5tKr,{ӻRP.`>edF"2b(…]K0y}( *:mq[6O*,'WjfI:wk~j8ᥚRYIlG%_S.õ3jj)j|94xZj@S`|_:8 %Om/*a^~J \ އ )\+``Ķ;'RPJQ(>]̟W0_bH%Fm"1pYY&d1Z(gΛC䮍L4o'.qc߭@p__~\`_Gf  zܣsR7+3S|JK8AHjq0NRQa8JLEyC JD%Fo@f^cH|urOjkm# zzM``wgIf LE=%)e;ȿ&EIITԣŐfuw=νu9\QsvWk39wY~0O.IsIehNo2eLz)@#9F+5I..}W[ D;s(g:{k7u0?ymX^!U]pV1Z3%XdFhoʽvW*-\t[Uxo$ۃ]I@..TXmD|''N@H@1q8T֗wi8R__2~UѯXJ+qy.hyah)Bx*b]̤؆G(/4ֳ7*ʪ(뾮+]] +| t.i i)W8`G^FY-`W#=@18~R.I 2@-[>:^ݱ=撞r $iM BVP 0bbIxMLBрLIa"NJZ{ YVIsَR!񷾎=,Գ,ϟ ܶYhwB=A=m%D_6"% \t"AQOЌz[qݳ;To$3Pl4)TBΨ2\a+j%h)KHɸa]4K'Td40Ϭ1&dVܵvg͹ĵ8 =%oHj%p2žy~v:1>ٱdv{{\V'cAAr!'" wD15JV RAjg HPlCX*-=C9KֲR2b.R[FqM͝ ! b2qlZpA .Sϴgg̅APg͹8|yZgSu>~@5qK y\>5OeW@ҫR"X".7Br[t8Z14 "F FYo'wޞؕ+{Lͭ-ܑ=9;f_5[ f2gև5~m Zmbk%2O5{Yb֛=z ؉Z|HJx] d:c'WLZ0^1"cv1;dӪ),RE!2NwNEɕh湷,`Pԭָ{n!ݭ'.T!mfei vCv\k0`1%sopAԌ>, 83|1ֺ х6!CAei~Dݺr5۠dtc]3CJA*ܚkz2nWRuˌK']RJKnq*tLYtg4(Yژ~)DR-I5C!6_DOKS]}3:h{Mͳ>-*E5C A[8s˸rzm xɾ`*)͙JY kI?6VXIuSe תNçR1UwAQ( x! [te!jxrAhF5gqmF,p++%Bt|f'$ |&+7.P-ƝcD{ih LjcS. nfMUEx# :dp]&[ԧH 7:;-OevwʍSW|%N`xA"F%;%qlMԷV?81xߞqkwxKT|6R5Ӯ93SX9SX9SX9SX9I! 
_UUW_(w<0^4X&g } p#rOY'gK?X~}vuwVU-I |@ {'3G9CH iN 0bxFW{yj $ZK%>K݀p`*G܍Θ³e8݌']h`B/S,l7apًSSR6CI7x\Y p/#i}9sU}գ??W}bc2U~p}FkxΦjetRjRZ!~ ZJͭN +vՑVi{iB;-p=DtAF˻(w^!'"LŪ1p]þ-Ho=(Aa w#.Ĉs7|f1b[=ax5v4kҡo1%Fz@Zr?P t$@x-\r{LE]/ uXpusIkQ&!CtR+eB\1$PgLLBрLIa"NJZ{ YVIsَ>Hɬ񷾎=, ,ϟ ܶYhwB=A=́˭Ar]e,A`Np0 bk|= 4k434A"TZ0)z, .D%s ie9e]q&ڭ ӈTt4Xr`e6j- Wqg#J0N9wOqN<-ʹM+z_kT{O}~DN=}ٮFw\Ȱ;:#{p؂`p"[bNiD!AN=Ռ wC!d+19;^ok|2m8+m2*m+mBG@*A4hEC [+\h㥵Z?uxi:zڸ% Ui5쫴JAaWxp !J+m+m+m_"Z]+FhmWj]_VkUv5ڮFhmWfZS5ڮF۵WH3iv,)\6@7( EqeZzu Uj]v5ڮFhmn}WWj^]ͫyu5ռ sFr%V媲\UrUY*Ue,W媲cpo(Pdr£wĒqaM"+GDr9;co%aq* 6 U@F i"e:-TTN_'0[Vh2D4ǔ02"y4ҥYǀt"i>kH$ɥxĝ'Nj^'0Ӕ ئG~A"Gx }]ć/g)QD_͟XvEݣT9;{tL]RZ7%UQqMiv_u!KP: -nvSz*&n:8]=甖ꖃ5K"/ _ |:;m_Ke`]cazO9uy6r?(G0n{p{# YM35>Խןp.`NS^)V]wiGY1R:Dտ< 3͸V95\Z:/6tJJ IRsQa2-bJei4r![Hp`<ֱplg!R-Wrol h+j5򵪘Sk0$GF]w޵΋ dv T7ȸR*XgvFZeXx:cwXN-rښG._$nE1yEo(- '*WQʺpBJ2Z_Z*iU4)ݴݘd-TXcYj {/R\\כ*lխď2Od }zIA/Gw/BS&|I9/kF9fu-ud U^QnpI*UWUV^U`ʱ [Nh\T suRo+yTBtHݧ+Dh]+-O3%Jc0htJ^b}wi^̞gO}[ݎ:<FۢDoݫ~qf?xD|xɅnۊ\6ͅns\6ͅnW.ڙY.\6\6ͅns\6᥏aZ Bm.t Bm& :0m.t.t Bm.t Bm.t Bm.t-\6ͅns\6-I(0p1תTbh-Q]C1;cyW}<yɩ=/RK),&:N PE#/UጒE{985ou#zs~h|9fa1k~فHu.\F姼w+,@[T.Z9Qji\M !DYHt|UXoUR~|'['Xc*롙1vYt]`0NLt-nYimIiiLlJڠ/=ZRAlZI]Xm!E)/4TVI*T%TTQ8*yp4:DwT(c`FcH[@Y JXӊUhHs1+#Zyk dJF@k/NR68wINλⱡN1Pk·]o_|D ZztKN K`JL%f^DU U]❩.'Tu_>//i&Meb7Kmod8ƭ}3~S t maXRB,N{UX^r^E.39#ڲzVY "vaXب*hKН/Ju);KY p f dTzMf,'dK|"Ez@4@@.>7R ly4G?$h(3>Fi[gۚ/1%wU"l9@,<>:uVzxĞ;iCdea5iƍ́@\D\Z:i/mRܱ.:Jk)4.ܠ )ҰF`d|K 5e׎תbƗN!(Mz jIk繨J@u+;ujheM,[Gsk`ښ"X)H΋DK-sW!) 
& *&{^C`)q@yMސh2拓w8a@{@Rշo0ye7ʖm% 'kƫ(e]8om!%UE/J-*xMCnCnCRj*E 5{/R\[\[` Vqyu{>pŷѝK .6^Ӭ87]vw7Ǭ 9 "TUTRU!Z纞 PjRUYg`2vaU[&YFM9{3u@ W=_n_ײG<3r!vfV1~EW/:w |ӗߊ 75oHz =\30[?Z_cV8h?o>|Pr' V,<5#t3OYGv=#ڝʳ"2Y2PWt/䥉,Rh$ERb ZyQZ-Ƣaz֣܀IuZ hOWultE;ptJκEWlb;LVtµ.ٮvT 3F+E*theT<ҕZ1]!`+뒡+@* Q jJJʸKu& ERRH])|i+ku*th5]+Dj'JKCH]W1h3]+D)T+#)=jd dt(|3]]Yf:+B\0Uawrug:,!F6#ZDw0wa{f4%Ue:tpMQDiq?py*+8VԶVkVCa%n_x=Lh]`D2tpL]+DiU+aeBtKG]!\S+D+;OWRLW;HWB[сz+&upڽZ\ Qv-V2F ?WŪ+Xn\25OVR<{S)jG6r+J*C*!h!(@ rb t J'(b2]NWC/zq@`|5f.u[PJ1u%V+CϜB%DWx46B])t(ttŝe%DWS ]!\R+D:BLWCW2݁zv*%c "ZNWk~6BW>Z +BBWv[VB󳫝+͘,X=a/$) О v'f"SCv[^i&etQmt/Hb!VZa0UmOKGs\h*4hDQJizi'nZdhk̀ :;Z^xu+9&`Ŭ;91N$|/#{|9W/ +-/dbW%UR8||O}h xTF|`9rc],ڙj';.n3ZG{;{֍HDzT3S§X fvf6`ߵ %0?y3a/F4Uvgd-hDe0AK.k+԰9 /gj+B&U<\hWs]%ax JX%(WVXqvBo 'K*؟OC8T];IX[FUYE[P(֡ N\(ic.N`X>Q5•jaIk-ن95xt6!~O_`FH'0?bH>N˦!?&~8:<%iNG 9$ e_9+FxJ~=<3|,rF[{fב{do]_c apA?|[fE17}\߫`MA מSxWóga6_I_~e9!~WwJsF:>,?, Qv EcFxdY^R-Ǵ{KW"bkx_ nbzm C2Z'ASh7|httfJUKץ N}=K!P#T~SFu 暟wߦ+򭗗;i8zl ;lSrޅPm<Ǒ1*N~LKnRΚ<;.ד=[pw{8 !pu?2!tO/f|tQc2X"Vҗ|e r%1Y/ G_|7 4*!\H(*g>SW_V]/}KHx!j!F[gh,EK4 [|LsQ2>U܉}w|KFS^H4bQBlPfbBh^%e|!$հȕ j]Qm$"*NuɏxQ {Z؎S;m^)uAߎ[yN״x kK<@ZA0 "x# &XkV R/՚ROmb]T_$с>iB80&N4Z7Hgp2:f(H7~z4Ҹw|9Col]ňmގ.曥ɋA~\ѫ|Fl Ym+(|It"ĭ˸Iآ# ˔u]YGF;+dD~pش^t1Χs7(t:\vwDöթg^:v>Ph)TQ6U7n,jx+Ci]Χ8L1?G8nm06:8~lYͲ{'nvPcYy6qqnn[[=^ݹ7bF?Qm0j-5_Kɭjܞ@[Kvp=t-:/Am6IW5sT-E[cD=[`x!+G*H|8Ӥ&6J4>FGυV()5XC#w&@)Q P)1Zа{QvYR~\Ͱ`zr\i-G;x62=Qd ,-<[Dvg۸}8*?5ຠodX g+H¿āiq]#"KE+ D%MЉJe@FԌIDMX+gVֲ.e 빱1$˱& O5Zks gJ*,6q.XBT+ڤG^qfN#:1 YP &wJN>'>gbo>X&Nʂ疦BS:@/p\JLC)S|)ʘwZh}v+~qn=ϛzFǛxQ|x9+CEHZ%! .)vo}~6GW9Q^]l';t{mdC \9S,ZiDɣ*n)2aAbpE۸0M2x"SJ *{5 I FTFDP $aK >VG*b{;O;MXв\;ĤFcY QȒ0jNX~{#OH9< V8)l@(}:m$q(Ah^E:3QŤ|ScX ,ZHpgb>SxM9&kL㳿-u.qUP8zqNwq"Sd"}4~7 ;r;ViC秩gԬEիFJg%P|뻒/sʮjqM9d^_R7Mq́E6YwHsF2g3TVG1 cMsγL9;>\MsKt"f3mbD9!'ಝz9!$ dtۯ%wH,8 6 5"V;_cQ ΃;Z7NTs9ϫc0-TuB*RѣjljeP.6~Z܏jR1 h M. 
Z^8l!)a -jI|i b$A%.8!€!/oDPkbLJљBKH$.PKH!qL(IaM"JÒj}$Atmhx^@fo^(rGhjwh_*X@~]a(~'sxe?no龺AVN+qG*WO!4sW506"ol)xYAT;u9e"hai)<1)sx`NE}(+(?ng .4+GxAEr(jP$ՔH.K+Ͼj7K۪ݷ94 Fc1,/f)+2Wbǡ]"$3j/[yN4͚+N}hN5S>1GZ!JeNL tr΄NJ̓40G3 9q:."ImrKZ߿L"1燘ξ5iG7Fpm3E5Agₕ,Y 14`  8cgAi.I]rG6]wGD>`&t6cchΈz T$(]Aផ^Ŷd9;CsN܃ߟzMÛkB[tq,v׺wM}?ٴ4ؖE5<4?DK9Y51PXTct̒a2%2zmd_2 h1Bn?B߾PKs2^qDnd˾BIyX*Ø3$JˣKQKc1cs1'1667gPtdQ3#j)&|>QL~ }m(h3n#=6xF '3Clkqƶ>5XlNka˖'F0bTJ*XhhM`2)Kh3%yMkb=`}|"cIa əi0dSL{n`5PE&@`VYkth!Ѡ˰3ҫ 3:S"eBi2)K<=h A@ka&h&4ˊ#l|\ (3TjC ],dp-C^m {AicmMshu ,d] Dqah a0yg\Hnvg%aH GAP"XMiݚ 膽Z갆:TkZ'w/CAQ|*֖sA95Nx X?Pj U[2Ԁ\N`6ph C^!$|'ټ&5^G`OɯJ+{X;N͐jPo]i?@6/[N@C&X!@YQP\#4Oa DܜiϧԦ𭵺ƒ]5  .0 ֎Iy@C\ZhPl-KBa@1%!D Q r6љ|6@@ fetM=5T gec(@XhʉP VY2Qhl BR! 8FGUWU]% ٷP$:+:ʫ#lF7KJޢ5ض( ݚ` dw!*Db"Ѡ ҈=>4Vcf#*, "OA{#7ޏbE]nMhNOcE5#&Z!pB\*y_طWY ޫCoXXl7 L0c63eV&> :p2Hm.%L56.2C6a1'4rUBWE{0z-4`2:U=h t "h] 1|Hd 7x4; cU,ԀGwfFFbݱ"xe;&{k_6H jҗ a)&{֝0cͦY,*VlBȈ ǡA]Xjk>B^s` ѨLl9wKbib#ܬ~򄯁D0ePpή%BܶK%[1FcGY$QR(5nW%$-e^U3rA`X?Q6J σ/$LB")k@cq,CJ35Fi -H?;~OJ`Fn q4PF̃wqӷݮ^/oܦzuy1973),d ClɹOKAh;1Z)6TyǴaxD 5x 30xT],kP~x"mLu1|ǀ61x߰re&w kծU5}V(rdD[AL6@Rm͇TC]Z]-{PiRv|Zwfd=ؽQf @ R_}^`79}q+)n.[{lwwHs9Qiw{I/?qOw/"ᠷ|85>Wg~k]|a=g+|{5WIW?ͫcvgy(ݼy0~vwzr xs׻3tR~L탅M3T =: =.fb#t|W24 tI`?]Κ$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I My@TRv@3$B[! 䕋:$˚$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I Mn(yoÖ@$G5l& }(A7&N1 Q@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 IM圽Po& 6$1{(wbH{I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$$~GrTxxx@Ww#B 6'\6f;% ..%ip$K6&! r lLR~=eJW'HWrx l8xBW6=] DJW'HW:J ]r)Pw]) jl^Ƨ^!< >cf? 
ٽ߿e[;Fs3vog^.=*m0д16c䢱JӧHy,]pyؐui+t%h;] ʐ4t>s =}]Ϡ޳; nKWeLF; EW JW_:s yCt EWVJ~mx6nNO͆Ij5m ] Zc+AYʓu7DWv̠% ] ڐeJJW'HWD] 'pWkѕ ٻ;]e6tutS'+kѕ}א6V;>t(]]ӆ 3fJ& ]vtut6DW6n[+A+Azg*Yoi~vΞ|Eb]lOp-ׯo.ܔ\z{})A^(܌`bŷ~͇'QJBU~~ծ}h??`ٵos1!g+i{ci8qttSoY!1mnfNW+-+6CWi+t%h=vh ҕڰ!ns?[0;]3tutEQ^!ln[+A\P&{WHWE^mJl ڜ[ݻ:E 0m ܧxJt%hΠI*&G(Ѓ;v7w%hځ25HWIS2{C;(oY p)6D84ZBӂ &i:g]0y ] \ ] ڐeJWJ9ɔ |tg?{>{쟉CGFWJJW_:6J'9BW̱ӕtuta2+O8n0JP[HI芌Knp+> wNW K; ѕflm<+Ac)]]] `ގ!omc+AkmE([|?|1[`/v@;AAv-'t~ϑlGNJf#+D)XOWgHW ^{}a؄#bZ„bcL[w_jp Z&Nӈ,i3]qFm MFU6h=MZ;}vh&̻X#hz}kK!;) Sos i}~G#{`郦''oW8$Z{ F(%mY TۀlOW/mz$7]+, ]!\˻BWV3+ UCt'++DW *vBWWHWv]\JXW RvBWWHWs֩-B3thm;]!JE{:C-H g`Xtp ]ZFZ?w( JqcZ%3tp ]!ZNWRs+-CtMwV!jGAD)s+CaWViIT[X!%1ai:3ehEDYY;F=t%l`ufp5 ]!ZkNWRSҕ$>NыMϤ`Mgy%6_}cTM\{n=Ztld3dp9 "Sk6C{mxdK-TUN5Z2z0px2z'pr}w_ݻwРK0<*\C ~9$pkZ|_2W_XSnl*4 ~}-Ȋ.? t1커,p;V_( WAXy5mtyʻ/6PJOMݤGE d{7l!) Eᙂlp`KY1JK(r2= 𗎬|7 -%`h8/񌒏QPr n]2jX}>wiYTg1:t_.81 ǖzpDl1ϗhmE&\`<-&Syy]t[2cx*e,%9|͔'{c?\5{@gdAlĆ_r e !Z`rWVD(Qq"!3jҊKIyn] $2ӹ6T"WBHψ,i ^iub8MvHZ@ɐGJfs!gՁmŵ͏/3{d,s|Z\B߸8</?"BOsXDbBl"u#=M*gGMϪuXʡݝ`/}N$}IcI^/z( ;4|H3܇g$WL6}fpx*GB?TI nwDG ~}z6/=rE "yf,2'sҎ1 RgMa@lQl<#*\[)c$p E3hBdj4qdw48 m6m|U/oxاХ^/v4*{Sl2{?,[%kt6E^ }X"A sYt È߈,܃+Znyd& C n^u8uAqk9W$2%tBq‰XTGDx!T+5Ҡpi2qoM&n9!ρYUdRbG5YCo FK:'co'1L?[q$ND.(ځ20a4ˍ }UfqErPޑHVZjCsAӚIDŽt Z(%3YKOs:c*HAr.rrα5]Yz@%g?4W Tɿ O b_'Ei1~?Ii4wo+xtUnUY"۪Ӆ17.g(=q1x|uş(Hut˻~iH>Ļ2hy5rUF P9^3&{쪚 ,L*sH{X2Fj}YY6@erSbv=jZEr=ř8}y춌y\D`|5ߖ9/B*ǯFɻf '8+:RB!7*{燀Df֍Y,zGPړD\!ܕQyEi,o׿b4zUѡjEpwӿ֫OӚU:Pv_/Zho"wYd_QK&PU╛_`0"r Z&f;ԻRBqpsb?Bkd=1ve{svwFԤnHO%șҨt^$TLF C8‡Q+fTN=[붴_8[s#Ů"#Tej@ B7O|fU6߼px8DB' Qb2PGFyaϣe+wogӱlbklGm > 9ѸwPő/M_o->|jជ3BPZ0.cTԇJ&T'4ħ (VG PAi:ȣMM&e`)3aA4 A4",g4!#N9'5yN o%9˝*J' j 1nrfc4s_{wU0O˷v1phҶt㞚'N414⓽^=Mwq ݿD=BRH38 5'"  u'`?ڠt #`eF5HOʤ_e9H 0+redJ㹕V)M=$A{]6|y˪6ep/y&'ϕVtKn ov~|s~vGz{ʸ51%rhLr25,c,D&3G{㓾=sLroЌgt\0\aCA#ㄑy̅iN2&s#]2SG| <%hqcrτLIb4gFdHG|GsTܘ\hx&J&M0xF9a0&ު)114 ԭGHM1Yu Jx:\q5tyN!I}s*M֖?|YIK7}DM(n9W 8YRLb'7e?17zOoɛ7ɛ1d$b>}=.IWR!A3{ ^vpK/mx#e Io޲wܲ>֤N@^+.թd}.ybHZO% $R',I!RۆֿvU\Z )ɿg 
4}:ד_QJO_6D*Vj@tg+f++B-s_ɝVߙ$@~3%Н ٻ޶-WiPC $ 6 th F\Qr-HѶd˒lR7sqFki:2RRydOjԬKX y-j|SE^_р__~a\;'q58ܣY_w Ey=@WcJBrryP.\ XhSB 7Ɇ`Y&7c* 91h$YoQ+i;~Б,O=PFd(]5%QE`s V=SJ &Ty?3 gSa21.\1D[$ziH&ȈyB省/ęi9c6_ g$Q'Nj.p K Ӹ"`rxDÌΝ-eFK@ܺD\mw!!M+x*,,n0 z23۔x Im-?/G:ӽs:eB,$*bIO~ S:[ t#`aA*I^Ó]nCS` _pt`~=|+>b)&#\}߼zZmunbA. Ȱ()Px/׌ lr tIx1.p#'Ӥ-'FYf@ nݬ^u(|}NDe7qoPPuT6: q"~2%vWӸʭrFAwAwN1m\CZD{XڝW$! Mc)ǛF^<ΘX"'JHaלVl6< jI}c2™sS\&ҩ4cҜVrov{;pnw]ED&nx ᛋ|ѝuZ<]ٗU9q4|9sx q.mq!J:s´m+D{3e4jOW;IW575LOt(S=oXtnԺ""w ~$/gh8.fК`>72 >Cvp83+RE4j M#\-Bӈmi@4rfKv=FNWwK7՝J=|7K𶁮jծ =TB֢+DrOW;HW)&t sF[CWUj Q=]"]qf m]qN(iupj ]!ZnecfOWCWt?1-+5tpmk R dOW;HW* XW״f 2S툒]+%(eEth ]!\ ж%;HW201 POt^ 1˯>op a LK$ڙ zNkdݨ<2ݖ<37>C[`,B? {:=brL|(K Ts*ǓՋɟFt `_nʯ!]<F:{*XƉLl`&nf\5tMA1DguheJ lKC_W|_@"wr=k(t_eDZ XfFLO,u:|FB_H L`p N~vLnxC6-dqV.T{w}\Cju_}_A}vh3-i$@ł,Z859-N ʸjMo)R %HE<󆗵L7zY*`ӊd#Q>)ԽiPem:_\T2|G*ڼ0#[mƔGw)JQp|FۺC9cY J( !QY2ZiQm oVP^W\^:-欩Y^:7w؊/UF4a ˘.QN(Ԫ474 ͳ;;"TKEӅtks…9-m͐(NNӫ]Jӳ}Ϯѳ+萺l^]mlL{α\KD[ Zζ}ܶ0=>JuN [ mZCWʶlc Jt_yH`ǡe|+ЊGA}7l];ժ]O2B5tp9o ]!Zi!{AbP.[DWضǺȶ鉶֧+DX/W*"JGNp)em+D+ȶT{E*ZDWXBT-NWro]$]I͔huM{+m ]!ZbOWHWVl8* j=>Ed<g0b^ +9xcxR`2 |zдZmZѯ\JBm'ZD)hwhҶɍJ؅l Di垮v `[vT])t8<(-% iFKO2Y:wF8l4(fxy\2[U>s*_>|M|iD Bsm Yw  Իg>>컐o@Ջ:tԋCDhE3[ϕI&Z8N\8IDe 4c;s.\g2u3~8Ls!1>1)m%)\Dei˵MiFIyo 4L@,3`}d88(_aK1pL&74a U* KV%LQ/H"=Oue%&ex?Ƌe6FP 2#~&a⊓(_'zgBѓ/'h(ŃCo?Ec'g~/h(PU{帗w q1 .x4mB5jIו,ٯP tPjD -5CP;vhYT s~ UT{羈D /uI)a9*>/K6 '<Z7HMщ{8*`1 KJکuT3!naR q}]P0E szCc2Ҳc|auWt$1+08iteDd?Mb im<}WIKlwd_&Os tg|b׺g0z駩 ZU^axd0 h~`KFx=WK.%%e,LqVȮ4 P(X+IzZwГ6{,z/__~_VZQFFg1U18?q/'Y@  d? ӌõ"wU"Tzq Ԣ_4nnQthp#h! 
9$F8yH2d7.=|Χà֗]piXǷƜZYxiv`UNC2&iˆh2@ 6O!>TJ Wq:eF^Y`nS8Zd?N3zs$,Sϭ6yrMfTQ  pʸgJI97je\&{ɸ8'cSəs-4I-3K2a+{cpXsdeIR=J qII9[)rc9?G*h.ƽak|ѩc3OsFo &yk58ߧsGwJ~11.]A_}($Bb)u^}KTڦ|\;c댉 2NqqiSxoS_G]XJKo7-8je}%ϥW̆VW)_o-B(>ɹOL QF8s#\&ҩ4Z f$otV 5Ҕ$3XO(8X#Ք[2Fh:d )2$tl5eFMk0OM'ay5eiZ|T('Rܬvւr967.X5 at9:v ӡ  {vzrG5:{I"2(2Zoktbӈtr-* CVZ1@*P8ӠlVjtyPh1xRAwϻ%wn//2f%jrȉ/ur ɹDTǶL+ɖ/}6,c ^~O?=qdTlzDvzq4ր< j0~U?^*kݨhN8L96W맴?za |Z $hɉBIH sZ~b@1ƹ$ !G)EHd48*8Ő=Iӱ!Q ܪ AuD21pS!C,BYVkul f[U*͎ #fCRm:X!<ӻhrlvmз㾶=3ohc!9@Mĵcve//NǨsg) V;24dKG{ -3@sQ<+^tbw٥:K|2wǭ~oiW6&T8F880j2R^6{8hس +Ár5 1ZT3CRf ^иģeOqul0UEc[$y8yxH9㪉۳߶ xT<翝-r^%o//mXe\@Qb5'6ŬjK[G[D!c).A)<+(ʻގIw%u=僐%̳xQ y* G*P(0$NAmdt @֙ rp Kdu3;J9n9~{?~h@Itr\t{tԥpBttua*tъ;]5LWlϖ U,f2t;Ri;]ug(gz=tқ]~7ЕjOW%^!] rOW 3N:\BWV:cKE ѕv&CW.Of画t(HW+ ?\UG+骣(1+O&)m{c]u4`PUGHWbM<@`ZƆ<@ϰ]0!tW`*,-c;J^KzyP@YƲ_~ͺO'}D7B}&]6IggwKzN=F54n_]rӓw#?wn.AKؙ8mVE̽GYn8{73J=iȷ mRl:suC܋h7+30X'4')G4GJEEB'~´%IzZ|+xjOKb>)} lɑQ秷KFK$GD*8MR &UH ;nN" I_+|W'U>x-}^w'Wj\>c?60t/R -B6 6WUi+n&ht\bzUCP8;\OM1HESBS1#X6˖}L\6 P󦸘zv &آlu[܊42VȍkU I Zt0a`+[c|n'ZZ|dzޞo,ƜK|c`<۸*d`LIqjL AuRE Uc Ǫ̺Z*c6.`ʶԣ#LT -Żg@x@Ag}Sˎ!C*5RcwJ<~Z YTiEh!KE=N1C~w")W]h*XZrh>w:f[#u1|p*Ju4z"iӘ!3̨#mفWI<)KIJ5T>>չx VeFΖco~OGV5%9B6 朒\ 53 <4 caMg7WNX9JTWߪя:֪ci]hl}1 )H^KR Օj9x.N@ NOGsF!S|:dM3EqbC_B#*c&S)2.`.:,hVMUhTc /UKIneJ`Mj/ZV\ e1Oj[:xA^TAUB{gR=95USaq[ b RT .ҧRo&J2:)O:%hn5 *e ǥSK !4k]SIYU֨ ^c'X^Ҹ2R2PQ/f,YֻؗZԃkQaT[9ď <)o)VܸjURbe CYD޿d5LjN6pNJ0&X%+#R5@I=MILg`QXqV՘ (@ c E=!u2?R۬zowIͩ`R2kaioY501:V)KL9y%\8[!&[S^ O]Н`UgGܨ Ӥc$TuչTb ʾ*L|Ρ8T0;c*GAΦ7INd0%| ?mZR}V`W[*)aZ~$F0%iAw2:$YFroˀ4!\`cQwR, g}2]1J< eC?cBZq=L!c)ٕ!`Tg "zOØ:9k̼~*}uf!$/ܥ<? 
#R@!,*(UAba۳2Lʔ34 kE~\)z t < _=}z=ƥ]+O2N a:yDL5vEJ;0p~zm}J*\)*Cz<.`#K0xS`|J@pp4\Xw9eW+r ]m%njHyJ#w&bcFgǸHLc!HLi#c C\zi€}t^~τt AS.6 qg <[.5E85!ǪOnB~>**]tS0l'd00i` 3bK>^_Dzͻur\s_n S'[k;=# a0zKm!#8k$!*w!rˌs @ !X "U"n-#9'au JFP),A[1cDX?O} 3s9QYDdSq' q'{!Ն0t 3c N6J4!?Gx0F wGzq4"F̃YðRz?!}11zvdN u )eegMw' C  `<.2 WOͱ:fV3KK`el@4)!d$=Bν(CIX< dnby0BXRJr.|f!TCCK o?5q}KOy/rOݯHN~0TޏeWo!3 1[=Dj?y !7!Cv>G r{,kt{I/WtsuSC?z|~wwÿ"hV>RVx;?_cS~ m{3~ m<~waHnz[S/_*`'Bg9 $GNjC' E'PGȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 :Re q%PmP]6'@z dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@ڬ:L $;Dk{kھ;DepڠRr1Ȝ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 ]'P59 =N ȭYw;9B9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@2'9 dN sȜ@qw7Rj*>O{~7~ $&'c\1.E__{4>q1.gv}p7-\_WKFWVY.R ^E*ƶp*},!fE`I D.E-ZǕl (HD0g5i-:2Yvp stjp%rJԖ:De ʑ jp%ri}r\T WU̮8W"8MjDmɠwE\! W+olWP[riWPYS0\mWH ijrH\A. jo>96B2R>ge<~{1c^vT{}hQ.]?oٽO;qiKGR(ʧv9 uΆn{w+%7xP|x &(~o>Uono?~oi~~Ce4|Ɣ7j !? }=ursCߣW'spT@ JΒ29drr~ko/WIyy&-b&W W"yީxk=Q׃̮NbYhjຶoܷ-\-SK+/S3ʆ/z2E\\JjWpA\QT"\Ap V+J:De *P9+•.U tݛxDJTZ2I\qM"\'\rE D-5+QɆ-*|R+z+\ZTN1\ 5%#MjDZG.R\1\mWqQ+!W65]ʜ W[֜ /#_ )]]fTo&STiى׶JPatyfח =͖ kLnY"q{՛*/z ҵH[\v.\hEZp%jClWj"\)ũb• Q DZTggz\5%=ȥWk WSpazer jOlW2XtE\agJW= ƵԾ#M {ғ \V+QDe *NQV;gEzp%j3+QYj*DX;_/\r.N߇݁֯X԰xnEJGYughLxk¢^.ޚu+{89֪(eDtͮ)Q a\Eb4mDepW]_/sI;h qc$ɭ+jhc$pU}t-)H/^ t9xDmͭ *+&\QdDW"wej_ *}kW!R"\ApU]Zpb~uWjXSp%5ɠ u\j< WoR+JRՂ+QǕpA\%5]Aoz+\­ *cqkekQ+ȍ^Mt%jJTvMpU]>.\Dq䓆aZ"-]d{v[t9$M ID.)ىlf_ %"gt܅+ʯ ί/զ5*۪7Uǟ5\}I$ =G5ju\A%yqE% WIDg .VUl Bu"\Ap^ D.-ZǕ]mW\zMkpk:D%WqkkKTpU Dnf-y\AkRe{ .%5{W66 L ȑIADW"y3Jl&qE\Zwr}~Ep#@\Ϳ݁ڼ{6 kn6Sqߵ+;?\߳KKt첰gb=bEOu)Y*8=O"7y'jkESPYL3_8Rjv"ٯZoڰ2+Wp]}(+הJ䖬W޳u\@ |.)רW"7-"_k>6@&\`E FJԶ ʘ WӛUw>\Ap^ D),R\jW W[U+•WvQK]ʦ6\R"\ApW"WϭJTj rE`Jzp%r+ GWj*n> _v.\qs!r@) NQo@m~m)-*r-R/IӖ7\AQ*EeدW̮ ɇe/D}S:WԦ얩,\KsT WllܸvtHjT&g (%R+]-[X }l"0]\JV5 q%* WJR+.E89\A-qEqigr+Ic5+1\ R*i•NN DnV Bm]jJJ *g_)xV+e͠GWrxΆ P GisåK`.QNU/hwttT}a,jXucp-v| ,Y ֳ*/Yp\iu6FoF0 tfGj29K^K&'jɉT-`&WsJG=hm *ɛUxfׇ 
W<7eW2kLmY)Z2^ /gJxH_} $pgu`¢O0ER3ER49z +?!uQ~2*ĩD)܏ckճ+ + "q2*{5cWWʥz9jL=!udU"WSQW@-U6%+O^mFuȥ'Z*1n?,֥JԲ'*m!CE]IDdϑ.uȥ'33=T*QH^RHINH]3ȥ'33}:WWJѨ =; 8%uHZL$˜ow~s̥ "%.wYITn; }Yulʭ] ѹv[ y%iG;4cw1k~#L)L%}`58 [3hnB0GYhi.ձ_ʃIޏ6$+'٩U&sLJ$% 1?~K10)ۙUN߻>lzݯMpu$,I1㩟Et^Oy",s6oƍnkNg:A%QPk 2KA; ?A)ʩ { e特It>FRMr7̯X _ Crc&y$8FҎY | \KLPqyJRoLNb4.^}t~hx10PQ04d/W-,o`7uGEɣ*eWz2]bκOt]&|ۭrMN;Z{="-ͧSw7}eF༽eXVZ40:&Ϧ9b# Q屣D`a&zzHQ#̀ 9C×x7)Y.03X^G*#R"%sˤ"9̰4N+!$@$N < +%C㞇~_v< zFXWu7EЬL ,>fD!JOVHFrIt k '`t  ۟4#H(Ee8$0!LۡV*H\D1o^A*z]vfw QƈxfR*C i[d5\EÌ9i!7%|yy0vbV /d_yd-)ed]N||Hѽnn2 "b^t/G^[oMV͎nAg?FOy1PTyB]Q`I,:3}} 7e j|9lOc:J ?0it}nglꚤ2:V,],+elvY$٭bev*H,ƃ=7 N[lx& ɕJZo`)8Z%IfhJ{T^Q2˛or$P{\5~ٵ,w ަVJ/# ӀVq.y8d|N<CYt{4C^(|/$ɒܙ0X*9>6p0VBQ&晁6ka<>}iM{_߮6XОa~;´'E0hz%C@1K =RUBy}5BR ,ǭ6n#ݽ\>\ v`)+ԚL VOq|Ӛ?5Y=lb1-qGK*Lo0*A50+HXv0#VTھ֥tvvEBy,+yi#+h,8ES(M<h ÈF%(ؠ!ii`$ %6շhbM(0 .66Z4wDE6j/c-^ + Ja1*0086**b)7{rIP6r+ 4q 0I&c:GLhH: s\SNeFL 52 nMBsܱ>F`,iED# c  k&HtSJgjצ;A-; -|is i)=0J,Se{E.a4=aw mPӞ=1ƫxU?eqs_dE6g엵 g۵َ#|y-CPp5Czz7gq9,K0*DwTn2h~K%`B!K˓AEgc.kPiheݓsp}6?oM{^ͳg{,يf\u-] OEp0.ګۋp.f=ؾ>#`[ww Ⴙ.Wo"wss6g%PV Hj/S@@*aLD<@|DeyEf_\hop62W6NNnk|5}F'#.1[)t>_9,nږzz\KN 64q"7=x/a^_Ԁ9-r͙A6ƒE!tϏۖiCXٽ eVqBRh"؉ȃf"OD {"6pJaB)7j<[֑aYqGkZ Hj^ Z9w/]BG~ނz]wÑvss#cb#5@ӝhhfK86@GaTǶn2(G϶"g4`r -%& ߂A's l "O韨F\3|. Fkq !JS%DջKp3"KI S"!&"ŜDðquȹN }uc`G'pX@l#sQ(6BbG "SvLtڛ2˗=msy5+(vem:j Q;J[lϧ )RPB098Z+b 0eҪHOҪ@hy)*@E*DĐu2$ 9D LZC'!Ie #52CP H"  _mP/*%',ND ""G#% 4 Ե5ԋMlF/YL t!,w٫담,S`8;H7F￟?SY a^Y![jwz/%^ef—Yu˹`-Y>oɬ}k׶n% ?I+d،Ίɸb[fRv 7ZN\~+6WHh  ^O"g^"ir`I1|Rvʤ \,Ak-1>6'^䊥u/j&OԽUa'5y1AC)fxNt\ɔX#rSu %i:09w;"QYc3\Xϼmyo3ui9Ͼ__?{W[Ǎd09, <2 %C3{x%Krⶾ[W=$Nyㅬn&G-^# iCț)7lhcb(-^a1gI7DW֒1q3t5:hwJ"t YvmGV hR9P&NJWFO[ k|Z7 ]MnD^ҕ8bm&\BW{u5QFUWdCtA>,O_> (ѥHW%/iCtІ֮%Zatwo+]6] ^OH|:ƍݦ|;0r$n>^ |_ y ̍O1(~zMw_jv~pO[9fNͧĘqA\bzVՓNGu=a;5쁧>1/ޘѮ]N̯&]BS s盃߽O{?ƨkrsOǿ~YqIG#?C*|({:_w. 
j)nٙvafrg$v) Ur٤h3&({\hK[v'\=оDC/oXu(/LWeQhK|(EW=$K 8fhw(Qzte= C܎a+t5(אּ)]rش%`ǴpBW{(<D'+&m&\BW}GWHWW_XQ%o&\6[PZtG%n8 ]MYhy[WHW!:c†jН 7В}ҩzt5Oβf/ rHthӁG{vb3av Xfhz 0Ѧ_J6Iit ؾ]@DYpy3rM{f7OvU~5g^I_=קy}s㓓b0CZᨮ+H(Kg%õ}oQQ ysۈy{sLwO/.K~g&54ˆś<=w_~XvoOpP*WG/W_t'Sz(dL''' jw\UkOŏ#, 曏_gґe;*baG}*j [_Oeag5:/}yycf{#' 39*Գ8bYef ;S:z>f㲳lo>|>Uwr}B{yo?yطW#S5&-q`Jl y.e#I5Lmz`3\OJH3-@B9Anb%RFק~ƎRgXF wBkk0 a&J-K`l0B!*ڜbrdld&{j%Ra\iQʝ,RFѢkKc}j)XCd7;2+PR'Ĺ3RœDh9{nt8hR}B߸R@1qjzL-3 ф`~BKCQ@=hCFpa>&8dl4Xj@cVmFM#dmR ߚ  <D^7I&Im>_;ɵdC̕l)SAd*Y|,S-$!wysYU)^{r}ϙTR35a?{HT5Tcg@1RkaM$)w9$5nj9o3Z|1 6 RjvH9ЗI tߺK -Ƌ`1ȓ[5/VWlZȑ*wr.zyR%cPG"XXgаH! s'# L|Y[(P![}b0`9g0أYԡC[BZ 4Bs8?Ҁ/+(qa0P&PyUŷٕA[pl5K%;ɺ*LŅĕ.( nhJ}/+*u>%onzdF)f>sAEۊF m {Z2@pֺ);O-`R4g\s+0+w!6\m`6(<| % v? \n]EzjPB] a@B!ՠPw^KCw(c)(b DaHku@ -eΝ<dT fR r 6\_ Jo7XQ;E4GRF[x՞e37$D)J6􅲎ZA!N ɮ8KCTݫ. RSxQf^$35ΐ_.}p@uQ" %k͐Ȳ(g ҰM6}EՊ{BufA=aP5ڛ^,KL+fcEUN=: !pBK_62Àt&^8'/.\@(E&rZcd^[1P>DZX33]Zc.%f0A 9͐sHw 3xhҗ *YȩՏoMG}^ y' mG5jee2 v"}M ~Fy'k7YB,uNbܒ9^2,LHcc.)d ŪuԆHT&Qw}TpGQܐa6(Xjq[S-ChW1S;6@D z1 bZ]S$jFH+(,yBΈ`\l#\cwa,EJ3-3jBi5eHTyPZG(o<*"bj,},*Wa1&@g0H!(vDMqPF/h,U{٧&jdg Ԁʬ)iJP)ƃdzkJBX iorC6H@C}>EG3Kp03o IMfjhɾ, Wopg]/`|rM{q|vo8$<90tt!eO`3=8ؗ ΦAŨѭy5DvԌ<R 2j31dhH}Ӫ2#lS8pPaRDH'-ҐQUr#a":U'%\]>ƊLv R K HςO(ZR|[7bX&*`r?y"x)1n!_,; R1TDyða‘sw(eQEq,f1*11d9T],k~xkSژbkcc2=\-ڼ)VzPi@H{&=Pf 1jGj='е+9rzGAц%!5OsU!l-(f4[)_~ c%;Wj} rtpI擓_pvU|9&wL.!.zJ̧Ap7-u(ټQelU,r+{K\ 1 L7rO}0Lt7/zVNvǪӦz0]&74|}q>op/Koխnߦmo j9ov7yzw(ܛR<_ٗ7镒PE?;ҕJǫw xΣljv~ a;tqwWJB팬2o\+QWCa@+*U/JMSfЊӠuliUʢ+*eNNRݧiM>m+Ҕ3P$Me!1|^Gy*#o=ARZx~?fһ ˙j0 zh]8Q^Q ЕKiK*MjFW뀋 Lu@k1FW+.v]elCғEW84EWAĮ+azxt[HWl>$}evEbRZ IW#ԕ"8NR0j"` i%D elcr0Vx!˳(Ɲg|!T\vmt_J:6 *7;E> JdvGф4(C``e3fho&GJiڝ]p= 4H뤈]WD !Uf҇ Bl=fj?r3 Z%qt5REL*$]4e0{FB`]!VA"JcF+N+FZEWDM"F+-XJ>+Uֺu"XyFWٴ]vEv5J]Y)ttK6R{*V"Z}Qt5B]9tAi"\eh^NוZNj5g *6 kWDeuwL)ةM4Ή']H(&GiVI:7n4Aⵒ^ F6G^_~ n8h呆t%u97&I/銀d+,W:QF`s pB3[FW{Pij0J/Mu6 q4\tEG?҉t5B]i<8F"`F h!z]IW#ԕ"HW-]-]-uEIWԕZSZ6 JbQ38F]9@rD`%W]v|ƣ+w; KȄ==1"᱃LVnJdXF|f~f\o8nZPςRR65R6K)fד!o#NN 661DҠ1Fr#i,B 
\`ӭG*1DӠGWI/'x5>AJ{0\8iM ծ]ɤ&HWa+5& N"࣏$jҩJ9HW4]n\tpa:j9a銀EWH!QF^xL]!.6+U.v]IW#ԕN ۣϠ+]R(u"hF_ +µ6utu? 0hLhw#M#ў ױ%}d5լp ش^WDR+fD{D{uaцc-5REV&=À 6"\-hm]WDLu pltEhM"JF+-Gt \tVuEJ$]PWkTB`$]a (C]QWV:0t*]!VuE6jr2 tEOʁBskj0Jt5F]y#]!p׳7 ^WDTu@l4?Rd 05r٠VWtV]nݗ{a7k}%ݍi> J`S"ٽgH*{ie>07"9M$GM;9цM!)~"9uͤW4{apna-J%]4X`+=aߺBJ-Bu ltI>BZRg%$]GW2Z3[!p+&v]IW#ԕZ:HW GW]Q(c1^tel== 8(6B\#Ԯ6vԩv5F]Y#F"`F= Mh]WDncIW+|OivENz6BZ#oj'JjWcԕWzM1͠ߑ 05r r AXoA3ྍ B24kFӄkM^DR(5m%V)7vy^7'}2&,/yso/xr|iGb]>Sr];r9_c |.e[C P74yߛ+Ûcw)>X er>̹(6񏿗zvq?2س寔?jswʧndW.k&Sn@=HXMnf~NrMykw_=2,_NONdJ}e.k-sվPDTʉ)QlzU$d v9;C/B@&'<'"BAmuZJ.pAiPԶpAP Zn>R ڛA~ӻ#Ptilue:)['_ UJkV@YW +-J6P=Kv"7(`Lw-f+T ^nݾ0'ѫxyYvR'e:%f 9YȞo<UǺX,wΗq&2nw^fyӜg~^4O?e+>GzݱmSerI/]vjTg^Yl3Q+ݹg9xՅܞ;yW^7O_҇~oY`pu^TVcԡBoIp:l̰ta YE즏|=;meϲ*TݺOŢΊ|fS;Qb-[Le*[jLt|LZ|H[,.Vt~5wyZcm{l/YWl#.QŬ}ofoc-  v;X!ˋ ;|2wgھ*/M]vSU? ~桂pS:]cr{h H0>}]}Xq{I~8U}az9̫5%{Ӕ嫬n-g5FoN@XڇH4{\O'apmc)Vlj;N OJ79[!;Qȝw$eΓƗr Unu{ѺC}0y"/6ڛ) W#iլ7g }Q7'Xw/]sꚽE)novϝ^ě|WZʇ)[Tmi5@(jU;%V+0U0֡m;mhd_꽔e`q תt]kor+ ⱻ_7 .d #%d{S=("%>F8kdCӧs:!=h;o|óyYYRknB>2ETtaO/bS4c嗫%[5_,IEnQH.۠ˬ6]&j]f*Fy$.zsщԑU ث=̞I۸D"6jz+ ju\eL߮.i,QӦǻ=ll^a-V) }S'Zvݓ]^]hm5> cM> 7e =~#eowS%G9#W]oJu= f5)Z)*$mĘ(sYDZEjFk=7 6#n:˞evD> 0p#rM_c^զOB{lr6}Q$F >HS̜FSM 92a"5ON9t9jY\2UVX!'nrdC%d$ͷz^,4NŹly\4OeѰ,̝X7]}"$:~]U6,I ?FZwd ;VBkݑg=|7%DP9X.ܻĬ"6i=\=;ϥʣKWz/SmZ3;3gxPOî&j+c,^ Y2be8qOZ ũ[)UpU=Nɟ?]/%^d3l1  ɏxK^|v.@m ԋ*e4sD{BN9iō,GcK={mU=?8;%b=~sHI!#褢548I$<nhϣ72>Iwm绎rd&dmN!~l0+},ܣtyT:.K9Ub2[BC&29*3V4ӊ&iY{6M,6Mp$d;sRbFOs.sI%bMs6W3`I. |$^ؚ.UlLg5mR ]-^~A`ɵ\ubuSMYJ@W}*SH!* ]So -ǶUm|{:Ct #<]Wy]\ Jd=]!]!vI]a՝WwF]Zx骠䢧3+ ̂]Iݙ*pBW Ji{:CR@Jd'g7BWX骠Yҕ˰KsWNnI\ՙ`Ak[2H(jOWCWsPCtUE*pUgUAktE(SgIWB?e .P]Ȃ04ac\ۗrFt 0W34]4vhZm S q hGjR=] +3*pA "BĶtuBim:DW؜9l7>Sh'tUP ҕ@KtEM[NWe]+R-L\\JKۂ]+ݙ*p-j=]J{:uE|1@sH)J55߮Rbb.\.OY@10]SP}LE8EWȚ>BY8[~Et eK == >6 7ݕ no0]OTV1BU %_}]m~H ؿ_Mr.sߝoqi*1ӏ]cXyX=M7Kų̉{0-}P|hl}Fkz;|lva٢:"*D$({jِ "޸GݬW)_h{4jquM]ϲѺsL$U1}xnU/Kǥ69&ʈ$DAyJ0 Aɬ S|_A֋س[A;N&Թx f**5͸.:ųdׄ5I.x%8&\ ! J*\{D-#$X^U! 
sQ%yygAFYT.{ j;0*$TH l}e1 bD2űp 6jM[I]f)*Cmy~qI:%E(7x6H$| 呂 rY\mDKUkDq+5$զXY}Y>ɒ0g2HIIɤcZI -Ab"^df̜+h&ePFs ,w)iG; ]B|ZLqC!j ha=g |MeWD2k2E;gj*bFM_PN!QCSVIbF`vY93)zVQ PhDZfqOҾv/vBZ.23ڸ:vI)=on%h|Fţ .e-%ϱD]IpYְ\g@x)%k`Af]gU;\L28(ǩ[S郞9$64\[$X| V.EH"j΢@][nyczl2QEDF BY@:D2Sig0 BGQ:'1ZQ1F U@5 2.S(]g.C~$ΜٻHWUA`A@$zm!\,=AY. H䒳n:NUBф i)ёO\,EA)S3PQ.m>+h-aoMvӊ#:CIٰhD(QTzsPyUoܔ"Hb:c(mIk9 YeV\hZh莬NG_@fjhJ}/+ʷݜ\lz̆M(cAE Fڄٻޠ(T6)ZOS KN-`NYe;ՠR5rwU #" +lBN?I:- d>QAQiS}W:(M*ӫl" Db$Aa6x_ZUb!z_P4;|p7ʠGOXP+f%LNcEUJ=Ik!pBK.}?ywݞ5 G/ &[ŨNSf G 1=*tT җАW%sJٷ)f B]e \ìc,O; >;ZY:u4kߵ昴j9S֔4Zex31ry{%з^Tf)t5x7 JxK̀= úSi*tyHչvAY0mLv0 )T0K;hJy3`\߲al+4O'`żb0xMR4'w I#Ga_t7*oV1 /)~Ca ۢ R،JS<RuI` 4B)1QFǤz ڊ[Vjc-:AU |t~g҃d &Sp VRX]ے>ݲjˮAũM&&K_ZS],Z?Y;8mP Z4XAҢ5M-m.4)+B@S10>Q8I/= Z6XOQp &nI2[1itlp1:hK2a4&ZT̎EЬtԊ#)OS,]zR@õ1Z 4&Tyg'~bQ!wԢ`|J 0H5zFoO{qV=UO^=9DI}9?>9=Qjo @1 A󛧼ϼ7ARޒ_ZpOc>~?oI?߿m…?2A4ßZ'_!]_J:l<_³=^qg0[ wz8KES.$?/HU$PI%L& 6Y[F@La>kX@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $ImWI vI1\ 3FWbQ0d#IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$"+;R' a$'HhI uP$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I Im. 
a$P0IU& L1@$ $P ?.I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$$$ $I IIH@$f@ZoͭejO}^ߏ7?nka޾A$yvRׅaK@K@\?]887 ]10LΒ?_;]1Jㄮ6HWAYG4]pzRtiGВ_@n ]E>[&g+uqu (BW[+k$_]s[u]iC |.P ޠP߾q9~w{ݿy[6GWJ.l?| 1Z~`@W;Z[pSyg[|vWG+wZF 7utþkTUW~2L =%m^|U}4?cҒ/*xtU J}T]&IeO9}ms<>4,)w쥲Qvn~Q~G^pI"I~˟BZjg5d %owwov3w騗O(҇6ڿ+_>뻫[j/_B95^^Ё2\Fq(<:Q4<ÅΨ؟u<ќf*\0՗vѹ4]p|K&G+Jjt: ҕ1&Z5]gZ CW|PvZmNWhEj;teRb)CWk0ўk<> ]mȤ@tETÍzZQj ]9k-3 ]1\ _jgvI6(+v+>7 fե (BW@H g0f՛AFD]m"[5mu{;/kJ_hzg<7ap7aވ2Zz( Mo$@k èwŭ5BWxlLW33ڧjʵ8՗v^@t+vV CW (thu%**Hopg0 ]mԫ+FiFVpҍqWNs'E@4 Όcf͎Ѯ}3f 9`~&ʴW F=~VK^kGtpn֞D+^@tŀ7avb6HWV{q b CW yhk+FW BWCWRn bCW 7Q hao}astt\1e(UؼKy=I.iY^SK/HZ%Bt3;Ylq(wN1Ze-'BLpy隉6f<>wtgЕҮפRѕ&tUvb+;Yuʐ&DW ܇ẗ́{CM CMSBWBW>+{f_{it( ]mR#]S#A;dNe$ ҕsVG5]vbF+FVOW2Z ҕw.=]yvң:+F*Hٻ6r,W nJ/Fz3IfNik"KJcN[%dKXXG{$KĮZ+/OQW3E֔Kb.ݢݒ飊ӣQә\y42Si @רԘ 3y< hpbчL9M1|9n8Qu;S;ۑxVjDS QI1JPWvz8ǤrZŎF]erѨ+=BuE5T?yNU2}*S)I]})CPy)9O=~Pϊ:_ȋgvЩr/~]7ze;3)wI~9w!Nɷ#y_أz܀Q7 >8/BƋri&=E}h=j!<0?=QEW:q_W꼟HewPPƶpfe[4uֵ]ЧswSx}& O1^q^i4#! l! È^_yLG 0GoTC cGmͽT2z4Pˉ9to X`-JZB^U"[ҧnۤ34#1}gy-m~V]wBG0#NZH+s}>5kp}b}3U{nerE7ߛ? 
qv/P1؛}zPv~i3;PTuׁ2POɠ~5qVvP& :HV'-Gb^av{RŊ*gDv.A̪0ѹBJeD!^~x #| K<8`9+ 2L b ;R Z(Y&fOLNf<.q>_vz.鯵:)5JGA Q;2G S;05ZժrVa9CChcwJ ?;_r6+~wPY~n5U7]Mo0] ]Q*TrZv1W UnQzNL/o -0Ƥ5sBڲe/]SJlŮA{Tl|:A B u3 WJY6@[\G|7S{I87OhKv3bפ>m$%eɿ珶G2Z;F -cs1=# cQW~TW{S~_Q5݋X׬[҂z`*a{n.GPg/Ugԟh6.ѠKeQ  's&9gF: \9G:rE3 S1*KUfU0%묀^?+^>-l9+> l/UV!I.O+ WJ1glw|&mof/ksf.3_vޙ{rgº^Z|y^F*'!C)Y.[.p{0 ?e){^ ?ŬMqʥNFSܾ+2>ߚUnR]bhH>2c<(-i)%P9 +eJ)821EX:τB)bc$3ap6;v(λeH 0n({Q8[I ޣ\R<#D#6q&)e)\/ #$)c/Ĩ^sVK -x4Ub[)j ':YO:M1<5] XSl(댸Wʝb 9,_E4yN8XʂP78'8C9#xˮCDɼs';$wgwZ{藣W?C[DR xfR|pB۾ۍן?G>])$;,gUls;,./69_%} @(f|VAPЊs|^`2cQHTkB<.hM,ygLXu5hMt0 Q5VCGRj6sx;).)\aڿ2j<˼hrMZ#Y6e25!£[lqѽ񄎞qMj ,RDrGıȆt2Jn;},$SKW>J; |d\y2Weq]^e'pVR:ޭwtc-&.Ա %gsH<&F 4_Pq*FRiEkk6kyWMQ ym;%عn(_iɝ'X`عvg`6 ̭eE]fʏˏ=ʛB7FFC4:᭶&9ThR /!H&X|9rF[Һ2#.d=3k}5pr'쉜2ɕ_:tyYԺZ2} 69hI1⠳ Q j;O9W'!j/kkདྷfO킸 }/Z(nmn zOpv"%UCke Ë>D :9gs?~R/g~r :LFlY?{7ੳc3JpI/ڛBJ;ͯ~̯h)~yon~_}JІ$=C98 9 BF9Aa[lxa6yak<1)" M=ul$JkJo~!՛B_ķ{lxJ&j#X9CQϭ^vxRwm&7k%P{pZv&v(qy'Xo qrQ|:4pcoG4QZa4N J$SP?p)gVLB3ccIAĉL`W8bO1Zks ~ Dގ9i;m;&pTgFw3,cc }ț؁Ov 6`Gvxc7=HaO#ÌB4Ls){cqC^Nz(wH)9{k9\V )䈤41irLfKmFJR.%OݎkOWWB eRQ#f"Q60FpscK0L RrFvҶ84K8,[.:2Sy{A?dxw,\ON9]6;]` C6!{JX;|,dU1.UNi* cV%JR頨IN`k6?|T'i29 h\> cCBq+#~m:S=$J7K'k|Hm|@ٓ+{lJy${< k) Ke8@cD,26&h˓5S*Hl}T*[-䜉OBP r"PB> vSf-PSi0FQ0MR3=У9IGTJ5,gUr fbT}(k`|1g*ViKj^`aPrr)09JN"Ф\})?Q4Q˘sTQE)Nʨ6" DPZ~=I'?ytAr%'-AxR%ў8tH&LŪs3 QP.NqPeO x3m *$xI<ܥwg)撦5 ty(*|a.u@<#.Q^P@w&IQ ',Ѡ?'q#YbP"  lb}z%Eqj$K,S4LRd49+`xw,јjaPcsct#/)| (5'6@&'̶'q?ʂ疦RSJ Ρ69.SHo%6e"-\P> +YxֈDQk.L)D:υfB)jcAaw ޱjx!2aNd()iTڔ8[GO.$ ݲtG{γ% a;ui"XidhhL@( ,H.y枮C?m7zL)m.z oU# !fȧ7IpϵZPq/cJPi5cF GV0k'T&u^T`hb,0!JYFRN:VV]߷ZSV $YhCE`*Me:gI6g;T֐5]b~*&@@*ńe`q {H)qSx.į1!ɷTo<שNN5wC j+.L".Keս }$Wo \mGw`BJ/L']}?PL y~hG"TFTԯ ŇxWKuK 5t|sXjk|j7Q߆:_ sQ5+}mu&aDu?smeۚ588z_iY!_jh}-6uj*٠jjE wհ56?TOY@+xqp982]"]v1'CFk2{8Ym{&+PjSp~'G썬/ f,-Foc1˽Qm QLlv¥|V@>s\ ыDTY z"IQ>X$2 -.}moێ0aqܩLũ 2Pz-5 }1(#{Q`ȋqFGjؘN|"DO>-b^Q*qT C~R%*|Ӊ&)H%ˤ) A ƪT&ON|I``XiP<yț5c4ocC 2N] KDt0FZMXAqʔRᙱh5F\2@G/R#IAItùiMQf 5\"Ӊą!蠵^Z6l 
\iqX5^p蔂F12)4&(0\p5"hXa,7wmc} ('ZIbɪ B  #\'fb̄4,l_#/./'u+)9'r8L;T`NNd0r2%óƜJp-Kg)A%?ÒPrT-ZHn>C0{ꐶ"72m֡J}%ٷ2 8AQJF8UۏP[즟Uh_*}[Vj_wo-:{Cr7aQ]c[4#5^hýТh`'k/h]_ώM6ALWfD&Mx4RLPz"+P%Z3+a`Բ?Uq]/^#=;klGӣW2h}ܶ9VlsU=Enu+c1{{ lQս?,?VyfX0_Z+B⤣6,Fտv0ˀraw&{öKqhy3wIɳDbZ3[JJOшbڗ`ZA\6Yb6-/Hۜa@ǐ󧉷G"5 c M1&%jtNՒ)ITA҇*&h`ĠDPtgBL l"0Q,PY9ft112Ε- 8¡z1MF{yY{_m&?DSPEuDw,0I\z4[0%,OF$%zGlhc]{m, ř/y"I Kt8P˹߶MN9vͷYSa}fln@,~U9 &M7:I[H+Jc!2F m+i7C,iu:A'T8(t@[N^8q LI4dN' 3sU+PH 1jM؜(-l %_5&N\h<*.k80oRD 阤1orEtfW 0ͫ_S Z"W:wz_5Na̓BQ}KQdc g]A j~/ħ]AQb4dz򫯊;}qr/geIΊٱ1b 8~q$( 6{wE7xz鵝;{ VOû>$}9!'@Q޼c+yvKb=r&iE#y a'ٹdOݠpR=2CwY޿yPZi)QЍ#Խ;^Q~a@/fW?~1Yg}3(M䋨(;']KťxDSQ#"КX$XKy<=7̈́!GPh/|8z)Yb |/*Y4dȒ9G4P_~gg?~Lgn{_q5Y݋Fmr y(803{7J<>ljgX^d2xB# Aiɢ\Q) $HjsXLJ d<Q=pA>sK񜂉Fm.)k&AHK#sL/9ͫZ#BiNgoǜf*鄇hU6M"Pe Е;ձRϽI=P,xp u#sghg4O"qu4Dɽs9#h^ĈvNk֜l}2R~v{K&B(vu5K!.MʊJc.(]Զth&>lbղkK<fȬY 27!:94olm\;8O#2Ui=VҞ-U1%hۄcN\ѧtfcs'>\z>en65{tN)Z/0зBBmжPh[(-ږqk B[ci BBmжPh[(- msS*6d S)]:BYVPS(TT5bFL[#ֈikĴ5bZ% %9w{HqR#颲m?Bq ԠKt. k"߄; $Rr] ؐnS @@kiͿW[$n|\Z#`dbaGK=cƉWqA9 ֻ'I F eXQZXZF OL3eˣSQXŠab3t FsJJM$A@?FLѦS Rpc)P>pcpE>>g'}6ϗKNmn1÷@UfTi8k[U BTE A ȒiKĕ:V?āiqIG3Ə&-BN &J$'cWHh\8scc IyL*$ <#:j>1ϔT 6&Ӊ17ң/^V'jlG*l̶%#Q[s #dt5Fӛo߾)&ޠ.pzBnB߬ q3&?\@PØs]~Ÿ 5u'ɯR|KVqx,݈h6Lp핾ߝ=U$6_ 1jQL\ڤ."nPudv~G"W劸F}sfTuCtYoǷ6UQ zh0<;sPn=:~%2ǯBH[T5ɽ{[O]뺗͉=mw‹6KosH LJ{h ::h[sJRͅ`Ք$;O[EYx&A"1h 1'r*|ԖH{f۞'@lO†tb)92|-XdKb@z| $-}x{'/ü7_ 1Lhޟbo}N%4|=VZ&,e:[|Pܒ$ZLp[ڿ[);XJpnPӐ1ȶ"t`1e!LG)(ݘ, f6[: -V@AE-6"pk& ۗ+A/Z}I圇%}lڭ.z-slæod-J!34jFH#=-[$mp 9AS׼<ۥ~/!Dt(?H}kbR`U$(c$b!FbqO8qG~/."WYVf/(7ԥ8([2C-2[G,ڵųuxsL(QmGs+:2,"<Ƹ#aREb/"D4Vm,/?_!w|x(1?ʋľ}Qm+.XYT;M`¹|ntN兜J%Q]ҼKwI.i%ͻy4]ҼKwI.i `)y4]ҼKwI.i%ͻyTTjbK8y#j am,,Wod0V2+ry|#1o:\}V,٘Z5iڢ$% S|vSrљd}CX6`[mGh& Kx+ m  ]^a?wj3YBnPk%믙&Q[MCW/VP;LA0-5GH]Iv?1ñZJ7}G%JT<̵$BFrQfe׏6ƪFsRꭔ7xEkTWU9cyH<1E[|doQ]*cGtg`6ucCXEj |Q[L%w:c9KDl!~rVJ;h5̄ Pӱ7oO* 'gWɺȣUҀ6l1@M 3ˠ?lG_ lUeynxȭY`1ڳ.\5j)u|wŵ})W&h5Pry؝˫7 MXO<͒,86X)ߓ||%?kl`f@ZzxX7B4it‰Mz e4-';S~HPi%H€. 
hr0<byjzYIK0@QPHx#(0Gy:'aQa* < 7Mz j0A -_s hc,,Q_ĐuJ%%X[D= a7O rnB1'ԓ@Qߞ.dOx#p]qp&͸lPQ$Qp"3T2bP0ŴvDM!L F*R,Rd9DȣU#Q;eT\F,δ;2-t0Ad狫(CH hrbS0K 6bZȈhIQ+مî5:f%o_a =jպlȖEߚi3g.Pr3Rg/~zAdO\Ub J^v^vk7@EKA_]g5?ocni*%ևV)۪T[/<}|P7RNΖOЕZA~%}: .uXƷշ*ZkuE)|R&Blޤ8: w: ;NW1k/i|YjOkF:%*rSc}#J @wN+如e3 wfpץrmtenG0̃uVa4BaLYfV, HtK G҈d^]}IΤ<t9>@$lڭvhA=eu1k%eMězш:5f WKAXֺҘRL j_/mɊA+#2uon-{3KNTsz,d_%j+{SLMi\Ԡfvs 3_)IZ2DptNU[}_IZ/LaoyISl,c܋z2pv\Rh|՟|SZaDj>.,  T YThG~Fm_ Y:Y[C@RuH n1٪R:^Z5U t Uf(?#ym NQ* kA$،d_ -{X&T#+L,XB| j9d-Q3VF\|+I:P ) ێ ȜDFRd s"FńFH&^CQ{y!Ie #52(B;hN Ip73TM٨RQh*p­bFj."bCN K)PPp8p߱CCVOv2\dyp:I!]"v~&+,߾ˊr #7_=󬘆py^?Wz}].{UիYs-|.sӛ"QvU;k^k+?uI6^kVqX}4W`4|O?SfMTO{@02+͎#[ׂ?8Cd.rG೒㸺5sERb>YW*tBQ#]fe4=?</ T(H|˱fRT\1*?(5 R!j{or5JXLu{\!$&d`'vhjHV`DBxtF<"|d"\QOj:5 H h0̝ Vo\JVffe E8[Mڃ7eMd-U?5+^4JGqa> A %ߖwh2lͅI[/ _͊9U?fͲԚkQ!ͣǏ%*0)*Vꜥqy >O4oxXM@A$q^ VO[oxea~rLe2XХt誻W|J)V9m/|TSLL9J|tjc^*AqTb$ i!/N&"}ySJ˾(f:hέ2QAVtBYpH\18ZD(|!JG<|MCBޘж&nٻ6r%W|G:H s1w4|q$$\bw˖pu7]ux,V-5!1 1I^xe`.|%e+RtYEnj{ձ֕W߸|::!EhW4yHvƳ ,1_\bZꃍy9z5RK*gZH]>]wD>0ގ^@gnv ^?{խ S{Uc^e1}tIs.ӿR6M^ /?iJ{Xi9˟&wiAvJ$=eRv@'wl^vk^KrE;(p'b#{BS/?um^б€4 ذ+t\~k{wmҨkt -ǶE-jme"j6w{J7}OnjNuta6͈qFZ8.QRW>#}qRfhzbQh:[ a8rcbTkP*PET5/] XU,VѦL!G:rpNJE% '/̂URGS7R,9 |Dlb,}2 SF0 VynbΣ5uDUL oo>'YWn|/^,޼WJmg9-Qh5B 7K-m͡xʽdP E<'%& " Vqi@d%XK&%VY #rmJ>b&s#'GYI'p h5qvt(7B1qS7-cpmd)KQF!r TRaf̺ !0csMmH*ϊO19xH(FH%k7afXK톚ڜ͌hi x,YqKϊ3Y>^N려O6gt%\A9}Gܚгm3_k)[3*䎳ILdb <+5x%'Pr@ Mr4`̒ 021[/38Z:L UqzXXK3B0`3w\^\f&ZwN۾AbqU;;is5\狯Q+.$A0i, ,1s\Ir3L,Ҧ*bK`$66^C4F *!KI2 +#v5q#N⊋y,]M;EmUՀX21,ĐT8 neKrBq)qÝ71ɂLR]J<C&fȁh!-89$ b2p5qa/:`<D"z@7&*،&؀Lk2 yffw^먕NJ;Y9`;i3 oHN7Ԓ> $472a$#NRMxsU{^1uVӒ}qTE3​443(49.I&as ')c, !1)px,xXM;C¶a ˵c 0.q{LяVԯ%4\ĉNw\33cx(֋c [^ex;#wTG:sl<8cP'[fc .e%cAIBЧQ9ImݩVcYwy>rri'1u^H( edO'е!BԸ咳<}rzx8 qTK38?l^fzt!VIWSZ_PR?r/&_$+iVriR"J!߷Am}ApU* bઈ RHkUERl5 %\t3ӫr';}&[&nݾSJ29wFFG;5Ro}N5.+0=tWRg(뻛gYEÕP+lB]<ۺHm @ 4EH6ŃMI4 YlQDTܐIWO7Sw8{f8]MWo.Ilw߮GeCeV9dW_QaZGgֱ{>!vy-mElݧŴ~ݜhAq? 
_M]p|_wd (Xz#g09;+񞯵a3 \ě[LniڭBEm)2ޫq((3Q*rIRţiIL~O6gw}VSdVk@`[Xl)$8*eWU%,4 )52M Yb"Ι+2"XƵ֢9<x3'ͼ\lO0ť?rW:Ap>D}1`SY j*ؾx5YK1 16I1q*-GD.~M.<wN#l92b|qon\~T%/P:K`2ĂQX29I\y[C#c#".j< 11)3){#w)8RU \TxSBgUI:L&eF#r[*E)VNynY78Cv&#ƣ#*y O.\?{]5@'qg@E!95lp"qƆd"+4R)rzVO-q(dhPEj{vc]@:gY/bI6Ayɑa2q86ɰ,$'2'1e d/ k+|'ӂ3.YeQyeI09ID]2eɕܨ5Ū%`Rk~BJ'CDV=/ 'مtĦp(3"ͫѹċ.[vkb횿lh~Iԣ۟;md,^u{]{oG+W9ִM~b6ovi?M #ˍjm;GM}4D[N/ヶa^6Z$׹q3t_4;Ot{t%﯋eƷ5=?- )kJ`PiWJgGDQ*ٰy-^x)׏f2%|-Igi|VB,>7RHJ3Hv-z;ؚ$ݔ),N[/{Ωw.fGG9i#2v'5c oG?LW"I⑾C:2|w;e~w9mZx0跘Xޒ4?zԳqyؒk۠d9_}@]7_.HBc2-_ O큰ו%rI.|ow߷Xr9UF~wos&ʼnjcK#Z&%&teqC?\:M.(vxڦ} dxA'6>pI?۝ CþmyymE ܯ?֚>_w -n:g'Ɵﱡz+6JqLcJa3+M7/2~puCÕNM[ڼt5u#׿2ҺK SJlUv][懖CJSi%3H^6\hs}vݿ:n&5xv![}[{I|{lj&< m^7wQβl3:~:m$e'UaQhVRE.%#=D3Fy8RK#FreU.۞ˉwoSCBxobbSbUUWLQG QO1; e.EЎmѻj 99u zmz|f*Qz'':慓US>׻?J#5xf^g0|EbjOIPYkfA((&C#adXwxvFePц"`Q\4wn7GG\]Y\}A3)ɤTZRr*8TQS6z\Y슬> *UI9t~ԇl|$)fe љM(YCgd͜\mb.+F6UcN*yъ_7>Q6@kE/֘`xvyP~iMibHW~n_v\. Wo6oX}6mӿ,by^ʛחX;=ëOzݺ_^5K^Zi^E:]6UAĝ o]d07,~|= ؟߯B+ cq`[28\rE_y~?no$ᇃdu6N]>y(BIn.lM嬕h"Oe8ۛqԴ~Zͭnm,~$}^6|{v#@flڪޙ|oi`pu"]xG'`Nfyc.+TP19֟xiu,#q=UW}ܫSyxTrf.<TBk*IDٱ%$t%He ^phk i1~ LIm^$9TB5|#PQ$ .VX5*ZjNXdҒư mm7#`\Cp%ڵW1(] z]r& ћ4-+q` O j}D@s$#YqȼTTQAck" 7k_55*Ue[a6\Lu֩Ǘ2p`a]0>0D p%ClCгWjOjOjV*d_JXR͎lPBF[ ;"B0Jޠ;7b`(f0GZ^}3DF{ &cUnԄT\i9V&xJV[|I%~'qwrI˺{vtA2+(@rUX pgzrg8B#m9Yf zHw6SXˊ0c[JA (2?3_yvR(ff$'qno~^k(rַRj/`:-iR6% xA3jKqoݥSŎm .ȝQY夹im NU .{߃ͳG\_]ԇȷ׾ZzO>)F|:oFσ,DG@rtK Єgui9T~,Ki2[5(q S(?Ԙu|i-i^7.8O׌GUnERB)TJcH15QmT3I_v3@@3dO)7s N~L8mKP! 
f̂c퍆=x&WEh*C!!VOwϐZyT@{K RB4d&{ϰ9õ7t'_;+7Wg5ݯo7ɛt.z==~׿Q3 CU pnD\Rm jnC1kdjtMIsEzJV d P%H*baql,αX[ۖ?@8.8^_`?-oրFbV@,7BQʵd0(0*IzrIʂ4ۮ+ch`/d6:RvgW%T( dLRUr9bw3gĎWĜJ6:ڦs6sԞ3؝QJqXԤbU^P @TS\ 8yP%,h Wc]HQK%3-7rkrQ9YIK#P!j\\'͜pVc#爸4x4Wv'VZi)hbAfWC l-o [Wp\vz#AӒ׵řcb*XpX8%1W$1y?"W%..7z5SZ6Js\ts\OVܙMF$+0%G}M2)6?=#.N%vCx!lJXn#~_sXFz_7W?~ǫ~Dܟu`Jdx}tnRܡ1b4kG=P!53FSML:7zG.>UHAANb1&ۤ+K Jvh-3>ٶRA ܶG(3M f QLm`_@2U\RQk7o69۝qrxcs*FDK3̼|Gq>#{5׏7$fW)_5y`e0zu3}h` 46sq;L&F58DT0$cR);5cf(G՘n5aXc-lQbIQߖCnZ;Ѡӷfδ-4!%jLl\U*$Dފbtu ko%n."7ЀG˳w-&WԨ_t9'X3aj||̀|ϳԴh3hi=w#W_/`(,I1*f@ % !SmGZ%C@g8:6Xpzrd?o30U&XN~~}6ft_#wl;f4&L}w׭צN)fO <̒.i=~zXݽ1ޏs{NOCh;47P! |h H?g)ۼxLG^OZ IհZlbɂ-KLG<`@&_v(:=JMi#ڦ ?X{-ȭtCں @E(eȺph'=Z65]z"ԗrS$iO8pԩBPT]2ZFEUC!t0zTYĔ0&7;`\7S9mLudˏw/k(A.o?0i3wBT.pX@,O B.#ZvFT禅 E *UR5@c]IV%I] uHjd&tJl:%()`,4cZ;]h(R'fC:嬖7g!\63|篶`\[؛hUhxp/Mێ[A d`Z6Dji >R76bM<{n7ߞ~v\*X&wh)XK)TOtv6e2ow QZ[@x=28ƴ\Z#ڔ&ŕXjm,~͜l߾7]U.?/*8OБd:8Tj`y >+(5i+H; dLLQ=]2ׁ8k-0jӾUXU18latz0CD$jNZ|I3N:ﴷfKhL Q9l$ &ucbR2Jh49pt .6&m ?>d1k*P!Sl3eKGodtq$f5%Y]go""^SOFv9lg۔!Vmz*!`1K.'{S5avM&%mwzlk ޵6$B_G,6\F?mŲd{Bأ!r;/b)IqpQw(EKtO\h&6F .p vhG"FA!!eHB~wlʌ9JvDЦZv JhpA&hat ! 
‚Bgdb/J?"X;Q 5ΔxB~^%)XHڀF+4P"EA(}{AKb\y9b]c)G]Ϯa/+º [jV0k'TW$&5 joRDTv򀇎gwaދ'"d!r V6i =b$ND" @P-zAVvjFM Q `b2!-#R GQ\x  4eBl't5[Stp1 SG9WC ;i4qi?8Ltp>NIf?F]i'q](8(?P /8 Ui}ճ(UsaGU9ECyҨ&?ה/~_sg_ǦGS\sG%& sZTUjƵ=ȲbFw3ڭH$rgJSRos_AYM'OpgR'Lŧǭ!Tv N oV_i {۴:'*pD5ڔٷG%v4!GJrN?ir ҫ84esmUK-J8j=ZFb$&OU{ܭ?k,+#\<]6JgVM {>%l"_}^~ u7D/sJFƆƧ16!@a!2dFZ_WHvXw-iq7]Zb ;odŧN5|y#0vY=x7j]@H\k 1ME Θ P1\מjR 3),-樀԰dZuy:-pn-5Ge~kT[ߎ*ChZ46q{4[w7˯++9hi @:^/Ƀ˓IIjт Aۉ6>ɮ=y*2gzeБY_ ]l;}1p4r qdr9wLV}w0) ;c +$X8uɥPU{Dzu0Ζ uT7oԋRbEڽ!ZJyhH,K> !}=cn+4B)IG T|Lv^KoI<\gJмZ ?K~>)NR[RH[e\84N x%GT+Eg  d?=riXIR:k;x<7/ ΉY8펢֫Oi ~؆a~ӣ?eosW0C雽Y41Lf9p$.:x7,ָWݑ7V `=)m՛=5\9<~<h %$4s"Fc1`)k O1q]6 &eF2 ,Z#ՁHOٲƵă}u(ɾP]/61#j#}>4At4]tm>7=;~xa|&4-z@%x%rsmHLS3<7fZ:M@CD#d^޷սbH!R|HU7!TԢ ށ A+ryԑ>p 5~%fvH_lW<[;p4-ղ E?XCdFkV7<75Ceq,J#Ta̩ܿITxQLz\ -ibzStSB'фy૝dǵ: BRs^܂ *mfKnIjƔ73oL4Vr5Vga3sy'3zphͦt3IAyjF=ދ(g&1a;mff=V?qܰżӭeɆDt2:e+Ո:[f4_Mǹ^0*MRBC*!:Ǹ67r(B_#tς ų}ZDHB7{@2S3eJQID.,J9Чr7v;oD˻߸oj½U{mrZaPu TwlPT{}v_-NA*Y&Me ƪtK^mw;‹N/Z^gwg|7jI 2U2`XiP<yE"gX1q[q[jӽБGn})\M"-KKw\`D!E]݃gc%4ǛDJL&%Ҷ[ǕpZSo:/Ok&DJ Z{Re9/j\;}iW TR(1FF!4&(0\p5"!h^XBXB޾}f= ('PZIbɪ !C  #\'fbԁs_'/h[:/nd'7]z7%7wwsP|8*O"Wz?~m}7.GQ ȴo _㳗ŊWĊ=MjNV[t wÎߩ.`ڂ@YQT_:FWڊ>y gj^א UtޝÀۓmiv<]?]8>?>ξp팖AL 1mJ"R#QYȄeȂjPA`ll#Qق϶lA@yFL$*"Yb7gŎ|lvjV[V{@> H$O&DvR+$:"ie;ޢ )u{Ȍ +-,ۚL" p0pdBEg9!4n{~Y<}]-il`;䋳'']S"'3bd i&jqQhwRcƤJJ312"X\P*8ؼQGeY}u6%w]] ]H:UJ@&IJQ@GUX/vqvq_aqW{Cw7{~yFn>OBH :d. 
;w{-N#|;>ёOE dNZF:$>S TDA!rt1lC}ȫmP/|.J1Kvid1MPcVAi™e5n%vxǘl.Upx-(7L7<4*>D}Fz3z `[y.>^5^]ZzVAŠƓz:y/<~l^Y6yG>0Px^;kv.c?լ O\p|?%sw.OJe/|,^,YJϩɺT_/#g/ =|&6BoMhz8o;$jX_ns[uymÒsu_/F?~F_,X>TT"XT28eV;&Q|J$,Z&3z>*/Sd"焖3 ECf72nr3qnYcsB犕䐬# :frg+ ҘTĜD&Tk>yZy!q}iF `'"0QGy8:(4zM]06NWt:< 3/GR~տx4=&:}7o_v{^%y|}4JӺkNy~zN#-X3_ަI8:^ru9NxeWt#_Fg!˭`Խ{ H]:wE?9:)zURHj+Sw4 &G)V9:A1 ?qz;}/E rT9YpMF;L+izXΏGBhnkc8Qj?Lup0X'jq.߯F?Xϫ>,}ڣ9BY q#;ՆM@:6nK Lօ 0I9c--F>5x6s%Q7 nv+GfG>*u*)@HjegӃg}-}d,ٿmk~/-kǼxq+-Y.pv<=H]JvmP؄AePjNr?OGbUȣ/_j\?/ph|Ϛ:Gz>K9-?ٷ € =>O^;#'2 TGTq^Vw{Lm2,zSޫYg>b?V*~_Yp5a(}V]0侂_/H !DNJe PHh M%T"#V@ce b[SOD"+הRyZGB%Z_a]/ 6j'6#A(}!=4C@4b[zfr:-L"u1IWBRV_WgKPt6D*~IygW+TT9*ͨa\Լ8Cf='A"Vס*cc{Nb/3>75Ґ8y /MvQ%[O֯g-}MJ:"Er,JwS~1M' =Y7n+qO1ޯd-`4":Dby*;lKN/DZ}36@@Y4kJߎ'{zit#!-ki]y(7M~-6[TB۝uv!tF <'`:y]?\Z$R<U15Pckr˫b ʄ9#CMwIwӽvٛ&[^0 ࡒ+ vo˿sr lx2X\'S\Wb+U)- uMjROju+1Ψ?]s vA! 0e+ː69MidLnK, ]+ uXvٝEr"!ۺ+-ʢhk/J†F)@$w>hKk8]Z $\Efml3ЎotȻz+Uܯ#~ i]=spb τ@u&wzYA3] L4Z^^^> ^J+!Jj傄$-8ShU^rt\h] M% DFpe<ՔĘSKHB9{[$,Cc:v7l&-w,9)?ai3>+^,[^·_腐EVlPw*uVNe1'￶7wy:fo(n;Cnq%4|x{x-zr|knZTtƩ :UkvW=r{:7Nf@)T$mh@@ZGJAQxfY)r6Q)qk~.EY9qe%AU0NƓ~gܑ yTobGr(P0 >*."\5fSN )_a_bD|;SnR Sw2)Ǐ{麐fD#@7gZ,_EHP*dnBF9'Hi!@ Wuق/%'SIMFY4D5 ,RڨUTXRVOׄmחBksy/1݃dhxYch*K&"N8&t02FT" m݂dAZrQ)DBM5SNgI;^75]!(5x!y"֫gYX+XRc"Y3qng^Y?Qǯ7MT>/<",%Ed)W[Dq"^F9j8Q{?O5oL;k30Һ$f(0S&H 9]٬(@#Aچ0mr<}XdW3 *2o H Bo*\`M#2?bF)6QT@x~s)1x:pc0U:aNЋy]XݞJȗ%¯ |5Vnp{8# lJɁywl: PmMAئCK@2'zP!6DC1k cFYYQ!x] %b`i?:L2 $LJւI*cS`K? 1r zk=kH/y1:*#s5%ifzK9?מ+W(jhyo_Սs6?fGۤo]->ryQ˷<}7U_E݇|qЬ"j=IY{Avo?Pإ*.xmJc8ޭHms1u4/ɇ. +<ˬo|^T|c_Y;:/eݸ?1e=5??n~hw֟U?j t sh+osbȼ&۔շ'$PBMѩ5JG~>aty4_Zz~G][%#GVP%>|΄R\f4g,߫2q'~U֣Yެ۔;Eg W-hsnߦ#Gp]1峕Cxm|+xWȱ'R2a/vL]0i1Ʃl;m]|ygˆ\%۶~Q'NLzUKՓC'ǥh^]S-x^0ȁ(P,D+>dk$"J 2p6Vm#j("T |ڰLȐ¶ 0v*K 뺉>}B ㉯$po6u1V!TyB%4֡QouX0Mb.ܿId? 
XMQ:c~gQ,߀cr2>`0?}6Oa*~X֘axT"o/_}3l:'iZVkJK{F>{w5꒑6pm17k"`3=}0p Tjevn2( X 6TXZz)Z-p+,V*%*, +,Za^b=-QW\/E]1a喙T9F]-^םlc~(Y]=\F-QW/uA]:2A RWLŨJ/E]UjwuUJ+zȢ^\'i|7*ᢜoU=eP;qn-dAiѫ O/mP'\F9.&9S9nx5mlrL[U=pu]ۣ>6Tpb9mO6G '%VuZMgstQB!oAUpGӊ(3V?7xpViwoefUM_#޽R4Pŀ]x\}Q;UI-R(R'- !ࠈ2ɕOѻ![ڀ[Ǵ9`A_=E(fDEdEv|cORiGZGJīZ @V݆ ΚVL$P%TwŘHТ)z p\NWχݷ;@Mo*֋o]Qּ/B}CBIWcc.Xɤ`eRv ("x`(DmT]Lv9>+'{Wd=X+FyЕ#~ gx6,ms/V>so6xbwy,K%x Gx1Ly긿+O}J>p]Iam/F|3)T#\(Tr%Ki 19p蜉SZC6uJP\̺꧔3kD+VD&q-\κm6˽js5T"=>^.ꅷczmnwUqg\twdbov/QgZy,438U6C4BqmJ|BRRk7f4dKJ,tM0D(zrHΧ26'FyZ3Fq82.3vՅ8.ă.|R]mqQ-°y\n Z?M>拯\c  Q @uZ #@+6"3,CrQ؊TA~ I 6Um*cYGJlaU!XHtLhi`=9k0 s_`ܱkm}{g!HPe,\$HbS.i(,`JD6hc} \d ! [ ˺&e̒akC*:1)DCX#~}y6;K*}шǮ A#4].lLƒDbˬkO>`L2dmәX, Wv:!EFx *YꙐB$&*SLDFq0r׈ezq.k|qɮzA/]řtR䣎BC-t%PQXX )ك^| /p0UvӇOQa7߫o=֭[m݅QA)X[ڨ2TcfgsDDά4@cYK*d%E"G*!;{k"ޑj5u2Q.sE j\?d)Wa]xP݊ CH&m&R]HQ3|d^)faQr3]@*GYշxݒ R,Q9{;}:wuCʽyD}y;rsvZ25ް y%AY F+U"HB㄰<,(.yUV]MNYAL JʙHh:`RL s0tȹ;wq*b!ʖ!e .Y!WEȐi :;pԢ@P[:Q lp2u}wrmyS=5mx4EicûkӎWf>o^d:7|:-7 ּ߮&M(i8g}l7׿~ݼ~:?جzO'x-k, ogV9>,֗מ).=ºex'}H S{@Ӿ#7 k4R,V=صF5~չ攡=;O~.G'J̛Ia?uwNgC0~vyG?'3ɸ=Ũd_gz?{ȍ!?id_Evo/ M|zeGE+vi%YnYxIW\2JKa:!K2w&yZK3xLbv~0\zJߐx yৗ3az"L4zv")-X'MPg={0(dBEp&:6eCi&B"U}׿G?V}=`>׍(ǣG=JMR)47#jsX`g?3}U 7B'4<;g *թ +j*KS;ŒO7ʭ3Lu C\I&K 6o8μVoi'Ű^I9tArR[&7oz?uЦ'Mou1ɀQ"3-BdQqKȰ HB)UgZ|iԚ,j5ol@3@&E>(9ZN;[flp&Ko\SpJJ^ۜyz^"q<{&@/„MDh$>&-5WJl)sR j\'/Z$O>b>EfٹlE.ȵVnTzU FU7cɄ %nq^ҺS gXbE{;&Owg3t ׭;mhlF7 ^(vFF?&'%6V?}JC{ww2Z{SRZJPe^}5OË_ogFD)zZ NJ++$dP+ L*UT:*yWw?^lk _t֔4Ml\0PvSiqz71wh^nGs!n7TLq9lyƗ?@Xh(%S> k\:(fshVO[ ǓCS_D K!%(TNN{m cb!i3(dMBmtTABTN*TS%k1['k/:Z8O&lAI6B/zX[<#qDAW^*[ʢH(R<2\$nI\E t\ pc$5|=biԫ4׳SZG\H]) Η\F2a, !ZCݮeGH cEׅCXa֖ clT)9 YgRj PZPo *]>M7HCHP8m瘲34 6rdҔQUJ.ї J^ۺPG͞)MTLDvU>ձM6Pg4 6Kד|ic:O6,] U$dWk~AqYXji[jl@$hNldc"3h+O_M]f|ϑ֌^*fof)``>~7i\\q!xO_*W*ĤQc,yϷw7MyR?.JX7;'^@_df4F0i~/LϢ|0Ew\IAŻy|b)Q*4˓eiJ3 gLJ؍WLRCDV~w%q%է5էVJɵoXrH* {U&&sTTe鮇K,4XEӐ7aӀVI.yc8j~%Wڃyҋ1 (rOXz_ҲR6 Ju;*J M Eff˂,cј5km#|hKIʰs)I+Uˤs@Liɳ{Bt@G䤕0*=.}ioێ@,!3Y d2rTz[ L\0N=A\@9}8QeFw&_Hf./yo3½e:LfpEI|HC|ƧJrEy)ZExE^_f 
Uu1i˄)ȅKή 6@֭:j!ق2|{OjMl+iw4i ]FkڭlP-q3Be6|(o[3b@&PA%AB,[UmO)xߧ+ GZJx. ߾7Vpofn~^o>[Mz[%pB+.|w.L yu[5ru5k5V,M5ŶGZBZ{CZKr_|`+e@v2L ʃpLh[2`9`Ky["U=&$eˤErHLrgHk-)e-c%4Kӑ\cbD2<FAHNx.1QoDv]:1t΄12VU3Ff .7+mȿQUv ^ف㍡U%@ E0^c!j{18LH|P/9`pp˱ҽ$d0$Z mT6.r X |#[Rr ]RdYJ6#W 5Ƅ,# rlƈd% }4hq;ŒBӒV_s!_yz9a7Y0pA4Z'u5b bjUܬ_Uk}_C(jW2 ؞/DIu83} ;ϓ7uDZn{{# :^:"BQΦ/4B+|k5 nlcVʕ었ި*C)$w}xOqc/0bQčQ&JX#\q\DÁ.ş' JNzΤ&N:%Yf1V"| {hu#w; J Fqκ 6Sv }Ob!^jn7<#o`kOg%` 8X[ ٍG6Kt{nhz o]f+9]oɵ]߄Tn]pozi' o[w */wW뛧74:gsY7HwVtg5]7Z߼ -/ܹ^F77kڛ^]!0md W74]n+[KnKВWu{^r퟿bx{u({o[vM7ǎc+}+bS\xOdQ4#BE#2|-7`戕v,m=k _v8Ď}yd(25(@28Yr"Z{[2|A"D0F1e2Dm\X%'g q 884Yw <_z߁w1<‰guEPK"\Nl cǑdCpTF ؉Vg2Y诘8qVK>,D\{^[^Aik<&bLfJ6ᬝ(~pd{C;O#bFUˋBtv}]V~w@jϝh5Űri_$vh zB6˰Nl!?g&-^2䯝mz /t}3#{!ž ~]^p|2 zi"@:Ÿ8ԳOb"Kݙy_}5L߼&}PILd=- ?vnmVm_ φ멯C洺նaZq~^gI԰a4~6pDMnp3zᅴVo&fۃJ4̯Ih Gvy4=o_ s=R*E Vw۫ɉ -y" jU0F8@'h՗Ae{Ezz}#2qo},ǫW޲<@i U©thQHXpmu/yJF:^%l奷*X^}-n^5,kRZ)a*Dgyo#D:y Ѡe\IhȱYݔy,q_jߤk_^`PO͟'%rZ~,Ho2l(ozz@A٩ۻ"m]ѸQWC\t1Pc΋eӫwm+Gyٙp6$;,01J4c3spVS%_(K2eS6mDYXa zcwp@R @2Uԫ$Q0J01?;)|v "o7^VșX_|\ҥ)kq+MpP}^Jrl4 '⢲oVI6fI%$~+ҹҊ nd>؍L>؍L-n |zc7qw2 +f'ʜK[m YOߏǤKl7\}ƨܼ-zķ,Ӝ%tNsjP3@&H!': F$ezduTA0eȏ"w1Ti4q.Oq1W 6 ֹk^EkG(㕌\BQgj'/>"RP7L佊"ԩ !M4?Z\?^Uj635u/ٍ ]V;d0jM~"C[L yoP)m-d6'd6_ETPR͕l-&om>ux Ӷw&o _Ӌ!MG^=cá=q(3=ԃlƪ\}6.rAM=Դ@^m%oxf0<'l&͈y#lҴV^o،y1ּӭeņDt2:ePgv( , ǹ^2 LRC*#)*;߷ar~@0 i2B3әJ>iҫiF^#Xej+j9a/WQzf-J pSfPWgڻL!WWJjzp8Sz*\eA \er ?BjҾUrWo8WLL3+]_w2B pJp-1 K~}0}%iܰϿBY;բi;i;HqUjt` >9.9<^h ODh\"<2OUr0^v>zSvaA̘ET'L$d$^K8'ކĀ `"ڽ"2/7l< / !P2&CBf"Ҁ@mS6&^;@ԑ׉oҳ.upJmQGGD%n Xk6:Xo@vAr\9RЂC4ׂ@rCG3zZg@׆<&ೠGIIqGY| 1(oDeA5W5gz8^9$^dE#Z?&i,:6>: KRKE#<2H}wRÂ=wܕgnN?(+\bFVR:jhD&o9-)c1DoTlnU7 pL4]=LWn?% gqV~6܊u90Q.ZIPM|0qmXۄeкѝσ_?༅CȒL4.3!?M'O?07;бy[MC>mt^LC$>)>xv몝 5"\ t ,Ҧb]*ZIjHMWU#X.$Yr1{W }B'~/ *CGkE&ils|ܫIbKEJt<+cqlŧ:/vzBO|߇9?2p1ZD`fwWW߼R⛭47}nsLhqwUb4c"/ϸք?ܛZ g7R3i*V;HjM~d[̧<6݈9[bme{ݼJnr˺8H1M*+ZLb}Am6M|ti5;)|uKgoT q34&9B̨LS5Swgr5v ExڤzdN pq9ڠætsIQFEG jzo2 o3b9@U汽^s\bkփbg"pZ N]2o(虝+ ˅\{/U&i)!1cQ;9F4Ӂ҉Y0~}@=~7K΃.n9)hOlhAN2riyXz}:+8Kxc>>7TCXhvN^$J6F* v[u;HV RPZI'QM 
N2Pjӡ,|twoXڑ֌S-;`uL0kVj8޺1N11E_âUQrg_kj#H"G x,Xp˨Bdh\І`Ha޲5{_,tQZϔ| |58?Dޱ[χnqeDE]rA$1d&RIA+v븒z(qo 1XÁ$J /F Au, +\V?KEYP N%qZ!2&јK&I%QZ#2 e‚j]1: M=I NĒAE!C!(J0pHs_'_)/9t/nd'7mnJoww܍yꎻc35129IRs.'p /8u3AK+h)J +8"7Wv՚jo%'b:Blzuzv}BR{={"\CL$C# _ygLP7k[#GL yr8aawlrؔ(w‹6M,j4-j qycq:H AoMṟ]vݔZ6}m>3߉\}N:/֠{4p4m~Ƚ}8oL?L/nE}]~bn){3n﩮`ڪCe/jZ3HjtT/, R]%ooQ7/ԑ%vI`N`*]r9',t!*L#}`pφA5aL\ e1]X :!BI#z$~k9$IzyQ~`f=;>xlwQ3^R5ѭ;j貹 ൔѭ*zj%XF;,8d9N/8|To@C3"]AuDЍ414NF\ 0$HXid4Eg0\*dRF8 ΐM@k_$\Ai Q(嗲 (;lLT?:,h-mE5گ_ů%4x;rIpdj/ >ΡwNNgW_&@ NY+Rԁc3z9<]`/W%@[CA.)j։Yt%fعB@g]N&{9mȹ+^$Ud!IٙȠ@'BB:(x2 P9E+>\";s_E*'g4 ף1]3«O0h-Iig y+ id'p>9g']\'I'on׽MJ_~&\ÿ0?OmW&0IYΗYO^&wZ|ÛmqqTK޼y,q>R\\C[HfOso758Wvm-_B٥`}Y=P-'za|ONxt\Y~W]j5?4Y04M^;?*g7stb6(0>䛣%_wn/Z?.K_/fU}ڝ\=؟ӹ'sO_efw1Xf.ܾ]o {7J򼎇i߶h@+ժ,͑rnKOf^<έNJ!.щ.쨔>Y>g_g~\8.f}ʱ+/*hvw[oiGO'} F煯|_b c>y>kyE{}۷{%Q||lҲ]:2x;Ly׎Z"f i?Uo߿Lf'Lu2{5ߩYpTR7nU <9s?2Y͒aU}##~<՜kF*}'y0r\euK^{͏/S[uXy:oZÒO);"J%fp^Gdɨ#X)*cHF:UAVhTh @ S}p"H eu _RPuLؠI$mxxYP 9OZ|;=?.tr~cI RRT e/H)TN$N@òq5Ty+L!D R@ͨa\m֯Iw|uu3(:d= ݜ99Бlstt@aVVZAv1w(Hw>~Z9`A_`GO-Pe @ijZ PZpJ `& M!A]RBk PLY4ֻV$c(D]0lHl!$)R9< $L}1rlfΞkrW!e 귊.N}u5fϚ _(}|G,hM%L6tk}m>>y q;\R] --jM9< i^S]&٦tkmp0VirdP"he_FeE0pUCNl Qp3svÞלӪo7=Gl?G4G=l+d$SӜIxWLc&kJKj#)8kɗEU )Dl2*Xk_Vmenx>W/κZg^r_x4:9br -ǰԜߤl^ʳC _ D! ALg@';џͬt) ;"yBՅ@+&-/SaKAKY+U(H^DXVI1`h)q.sT(lZ<>oZfivGӋYxEմER>ݥ@h{ӵZ͔W,mV;>zv7?{{8VN7gv|2f=oZ=ywx=ZsP.7<]{w4j|4zKAU\} om~Yԥ=9?bu/3 7ԼdTx1h.qQ$CYac%a{'[*=]D(=о'l =Жk,:p'nÒaL[UaK|::B)kK&_3T>U8kVBtIqק yfw=lCb>! 
UN(ZBesKRK 돗CIm'J& ÿ}j@lB#z!F@" S6S#FQ:J8XO&{ 32a1*Vȡ0m.ȱ4ۙ)6 '^U1(d wS@'Ti\9&AS]N)2:)m [53gwƕ#׿ҏ 0mX/l6"y.5/[YҪzvK+c3}osN|M *Ρ/'M.!&3WNJ2%n?G<171>oxF7'ES|(d%Ll䫚(buJ,aNjpr=Dl9ICEzh8KƮn®I\UK_jMgEHfQ/cL{dc !(&`3lgwfkG=_# S+I!Rh{i*wu)[:!7z<{E:qn$nޛo1ڞȱ.tWC@Z:i{L||ή> Y"ƬAOP!u.&Z#`Yq1KxUeķ?]3}`:ЈڐrҨS{-X:۪/)'Yfcx6_E]_jiE^|~]GxTܭ!!̦c{hn LtOM3ǖ]gFG9:ozZt\\_});X&c킄bD$ vۆѤoemX_~ʘmf_wx9ݼLȎ}/#ecx3/RX`[待7{EaTA5X\<ۻu˒%RɈFp[db $sa 5T1AiԘ]Yx7Ok2uSf3g7{蛜9w41,y|V]&V :VԫhEPLllՐ:ΙgbPubYNW`P| ξ|!8EBRBD$") A9 >7}چ南D.OO XqґY%yمښLZMP*VcI&.yNu_W-hkOptBj%8(lݒ7&8HҋΧo=|A :Ȑj1z{MN]!a|6VU'f|zܞaCa}yW{aگ1qgz=IgR$7K8 UI$Qds6Te[6nzͶx-ny+|!i u6HLCҪCzSCL{fɂ̋PGVFTӿU\L*::+|Î&' ]|RVۃ7~}kMvu=T_7KfKIk_QS@\sTz <.mexrRub7p{ENq~^A2UkU9F2\c2P܂X䔪7r-Rz54%1OR-R?aخry<|wi]0->og+\7#agᷯW,>ߌ͟~:;x6h O*yG+Lrɏzr@ Mw1y/9=w;{dm3nϜ|hK=0ôIMͬJVkë_C?LtXC[XCGCoyA]p=ɱQIk~D&&)t?Lj?s {b4NZ1;i[B pyv'G B* $;+n8H/ϭ?_kO|f0!e9rDbWQrjj%XٺUU$MnS ) >#n*'k渔cτY{ޝo ЀG?N.Oj뭹vbLku%Ա!#5\Fݽ>?͚bcSS.$-FYg]М-8ULZ^Ǣ&gtdvw\tV qIs(,Aީn2LNZ-h :NZ+)QqNkѵSb\G.>\RMRlه Rmw & &xh3 cK(Z ;3[ ] }ùk"ŴzɪƇGs$^uZh2攭1+#mYa oaZʹca "&?c˜;˥aUC/f[["/0^7J4s̙48@FsKL+#T [ɸC@Iu)4A$,R JLhiBK"XS(c,f֜m@41ea_뵉 . :9ItE}Q ԡ*h`>;lX'4ޙ5q$ ti>Uclz:EX!߬F &A!TAFwVUVVfu~YT@J:9HX{5:GV0F8_P0V¢1!y9R@BUD &^[j‚%~䱸6ij%p1\7IH 1Vx6QO>Ryg4u3iCAUCx ~sͽӴgerXVA>Օo2J BDr Ыp(k 6@na.\z`1 `ZH ^dJ{!Ҽ?SЃE1>Dɠ8{25!lDX4tq -g~" bzYlϢ.57%Fe| f7).ă tA^bp0[@$XRC;P]%)]DD4XoQ[&j0ݦzDW<^x ΐTgES-<յF։C aYyzBĕ5eW&5flַ#Z~.PAArϧNq)@qr:i4⒕`U+ԍ^t jQ|Q--I mIh:%Z^$Fz!r5~߫wjj Pc7-%;LԳ0?L>ZJE}oK純dETy5}?LYUVPU  tNTZbkNlY.dzť`K" qj#k ʤ+h%)_/Qrh;yJt՜$ J{2%F{AJk`FSmB ($B ($B ($B ($B ($B ($B ($B ($B ($B ($B ($B%H )EN+.ɐ@F1zGF# @d $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $fI [v @ 0B'Cqɜ TI"2H}$A! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! 
$Г'nۻIٛQI5-nfr}PhwH)NyUG@ TL3Z@\}2GJ\)KO\;=%LIb\K`p\G\p㚫{I !㘫{I)\{+jߡ*z2檈k婘+V0vH-o\ ZekTՏ?90iC-P׎۲FH[̦ԛG+/ 6ѥ_W]N*?7WЍRZ)!Bypb Y$p 7 *^,c]R0L*Pql/e_.xUn膃?R~ÖO2p|>~GA5yNC%C=v}SmʸR'4>xk.LSb1FK)LF1ƼlJJ+r6Hk-FkYe\r0 cXb]5Q_zJb/|>h5]\XORwW__av,s!p!wy}(.ViVl8=[.̪iW_֯hxGr:q[X:4%[D.ϠkΦ)2p<Δ|AH.ew 5\O16G7ܷMbWHdd&jsJI !ۂުP[ P ӽN>J 8{I֫{̎ n |-OOz.tu-^Pe^P^PźLnN{Ʌn*kX|:w?mm'ovPY`0-q~Y &-"gA3gcfħmtyA|h mJ: }N6^:R =b=\<'υ f|,8:W-t)d<M5^F);whC_.HNT':Q]LWqe\o[WۦG׋al OUs|XOiTາ.~S ߆Nn|5y=z?uE?rl#vnxr|wOWbƼW;O~ݴBm8ep,'R.#Tڍ벿1 и˷h^EA8XX~ *5_gf3W6SbbT7Jwz4< Zh(3lLڍk\~,Ή18 }὜5H)vg<*-=X|?Rx0_ 6 ~߫w=(2'QJ׉\@l&y (O#@T Uzm)-.)0>֤e󣛢,M(b93K DBsĠAK=;{M y!fMmmL;7=NʟmE/o &ځ_p$/z뢖Njf #γd۫@=įcT]mO١<؞rp፤UJ˱Y&+`2dbhdXGKRP^_q^N/{׭@/r^:tk)hdv;R2߮IB_wȾ7Noh6E]Gr}"ƞP'): NNH+<*N RXt[$Ui1?zј%km)/%cmE$e2cv3Lwՙ;hՙ̀׈hAj7'{rhTf"p.\P!ƕ/%%d O,rJUvQeY:1۫r,u7_fy?=ÏӧWGD)0*PNdBg! Z $r Qzˣ+G*o9҈ rh0*}eCT9/:gq !ϕw0ZppUJ_͵ &{E2/gMOA,8,j0@.YqO|c$eA53x<* J-O_KAE]KEtp=8%89O(e ηY[X 3NyRQ oaў#dZt]> %Խ#[@=_È ^tA=i͸2NR:iyDI0d.x&60t >UzҎU9odDZOXAޥY9ZJ"Msu)BFWr]r\i2ݔ"6b7g)!6W _`տ}/ ݶE7CKZI^Mh:,1-ᖾC:-A߶v-aobn>;jo@xM}7(-unׂh2qv<mͥCzS94}:oLj/FhSO2fdw)(RB_`IT,?=~[~qv2l=kz|u}3brCѨeZx8_6xo.&?(M~ ſ`d=.Ls*j?VSli}J `8j0kRdRgxotW3կ;|!V:mZ9#{dnuwe4^4m]'tp;+JU[[.Io+/&~יtz;0߂p\pXfl.R{M=1=LGQf-Q ,w.MkҥL.[k=n/ XoIz V1}z? k6 J*ݝoZ햂KVNs1#4TvNc|5]r,90F˯bۼaEbAVI&] n P9^ue&8l~W$bSݏo 4/f f\ZS}E>qx`q+[Y<80hK;>O]a ,VÇ8TqbU;a32no`oaf{cֹ b}@W:OKqk1ػFó#'lscw|oq}S ]?DBK"A'uFRHhAX r<ȎU"IcĶD>abH)2ZCat>\pQa̭=O?&fd/je7k:R䵎.FJ,FD^بڰ(QHDPt_*+o z\Bc&ں,EJ)לP`C^,)mWbK7gnc5 1G90.jw:Cf; +thfdX~!&fqm76\sh$Fn$~D_U,~$ִiz|ڃ,;|*z@uokUcSyi@rA4)UzBir(JL]bڽ2')l A46igƼH^ɾP~rI(*ɯR>2RuIb%f<,A%m l&H=Xgz<#^o:=.αG7l`AER+trѢl!BmxL]qëʍr~,#G2V"0jP@FKޚ@rm 3=PBeL"㶶s6ܤFO҃ksAm#t=K+D>$wYoT/P""=tJE)#$0pblj{(.,G<>@ T|Kd@V\Pk``*IևZsgmtz^ڿ]^h3?boAҩunt2> /MMFǐ@v3f0ٲ?s(3 O.Of}qo7@r+VWQ -r)z G~rA=ʗeƊPQ8"ڂ%@cI9m^1M^:T(J!K. 
F,y։ 0b,"go ` _nb~2Fh&QZIq9v׃#nݧe9 Ou#HكWޛl+B@K+v#W &a (>.*S GQJucz#8* wVg sr6ӓ=.W2/ׅa<oKeAjߑKz^2‹ம XuLF#n%dU3IKo%g6:tI Pricv`)0qi(Z{f*|a+x/T}gt~Q#;5(.giu !]hI PuGyD.A#èYȄrȂjU8,T@Z^u;4#>Mv~x;ÿ/v%ɋwaE8Bӳ' l]S?T=M\#&JgvQIR:)W] ,YK)gL[g};_IPߨaw0R\"_BMn;N|?A^׋z|metgcB^/ ;ߟ_ap,j7d6:}A"EQMmo[N{^-E==9̋bm?z uscf˽a*I?:$HK']>^ z zQmgg g;m7z# ' & gfBz,5ݯC}ΐ/A 2@3΄4DYm g kY>LtJ&qr~|+:]uu^Bo@goWxۃOz=ؾV-@H{%t#C״U1z+lUɪ<`@bJn"%4P*eȒ\zF4ڈ:=S7&y)#ǍGqoZx{WܨFJ?GAb l5ccxGAO ţe)a\gt~q){tQ{׃>D7ʺL;wE[@}sųafnTbr<:1Vgvf!*/`S#e,TJɚmblgpsCzFWXWOyt=߶p,=zUl6ʔȢ!Ki%)`VQcMx4؇nrMrXX;JE'۠z}a:)"hv"Ę"d,!A8$$:|TX#* m6EOςvlDHd5 Q)D4jREXt 5){mg٨XUMe!#k@pKjL$cc;k&llP:E˛RY$$ZQ E%8RB[@2'(ak-MͧsG?K2OUy$cD VC[emzjam,JG[I@S˔FY<}{ b:)d%κLC$"xQKبf:o-C#LZPZT"W\VYK&5kլ$PF[a.ΈAoLf*X3oAuyydX)6(X׬׌Q`& u}NEy44e0Qic69.^Ě I?4y>>HTɂ%|<4',c74'/mD@(z,Ucdmh88oPrWD&ҨJ11]$(%h)5C8m)ĄIm-"RdTNatdb/gy"c=En78/'G(B{w[pl\݌ͬ/G S;SȊY}h>w!6!N8?w6p~+j}&*Pm6/Mռ{q!A7Y"v֗uwWwܱ=n$d3PޗwM6Ejk!zgߵ+[YT"⪨o/mgQQ:go-vD uu[~~D)tʗuh|+oD.Ȃ#)ٷ'$PBN5J&b\'K/'ͅWvF4ʗPT4/Yj[ 뚇Zɭ?SmWoW-È^P_'r/NƯUэG]cq~1>UCpժyh6|6^{"W>%TUHł^u-2{BXrGv+,m8k~Mvc!=f߲k×e=jv^vzFLQ.m+^4TJXZ ,EUՅ"G5ҙVu)dom"쭆J,d.5zS\L ިF$c ͋4H~уjx6l5<ƛ{D؝lK9VuŻjqnvESPY_+rTcMɧEK6MaS8v@K]TwmE=MUQl<ؐ6z㘷@sTk, yUecQҿWªud4X2ҕ{MN}r_?o_6;FA٧,xD#gI~v }@uliǢaovw~fg.l?+N@6ۦ}mkzc\=0.#Ml5,J9{?~+O`l5FEΝ0.ǦPydҿdl=ؒ>H*1|!Jhd(u]S*ڪ /C45gɨTጴ=n[eF۴EkkFh7%hlHl 'vHLRyH[:7de}̹ׄa=gerIlO/.|! r|yy#j5u[j+X(Jյ"u'+-_[\[a$&h͎3yM7tDYkY =UZkM,YkI"RBZ;¬FyyrwL=[,S ^4xy .sdW*9" i@UV$3%U'%U%UA$mYT[3JFʗ,uuE zI+*u8Vi R)]TM-ZKZ[hu%E%x\ʲGZcjO==OiCJ]+{bdr?ўݡÜ~XwO6WWnꦻЈ:&"W@A~U!bB2ON2+1ꜮW˫=+7\X폾zm{_'e޹vG|֟>~2̇=Swկܸ.e5'M-#@7Ms]6}<5/9?~KS[37h@ǴZMѬV\-cYf΄ZM*izs>"\`om4b^Ă+ "t\JF+,c]Z윁Nj_s4 E.5bލ_~+oگ'=/=|}Ϗ=G'|}&XW]uB~c"{.?˾\l_& ^=W5S$X8k~5`mST[kEwَrv|W+$ۜyC@ 7%[>p`$W?e? 
@O,p.7NIz6旫7jv{}T]mh~vP7d?-VE_v_jWLO[l狝VZMk ?d5 %42s\ Tz+}'J,dbZ*U:2F",tu*\Z.t\J\MhU/Zm]kvۿRڰpz%\[`iHxprb 8|H!ScD"b4"XTZt W#v1K#˵.\Z%lbi28J\}DBO* X޺"p5B\i5płW,׉XpEjW0j2 # h+R+Wrk oxp儂T\fkehpj WҥQ *dzLc?]MN2*r'Pto@A&; d:ty}`yR%vmQtC^(fu9}*fa,u jܟ9"^5 "V\_]R.<1Tv?0[604-ͺ 堞;]"r"9?wOདྷzkՈg /Pg.k~ίk`*ke|v:߉8+_[ UYldJԹ9YK,Z,lIY+7mSoE*s,}Q9o,ںI5kb}B1pljY :]@9V.=I˲BY="!Sjn Era@6?Uf*D{R=}x0BV%`qxVH[UbcUU' 8k/vhWԚ·Si;}{'\[H*QH!$X0"\Ip\ W,XpEjQo]J W#ĕX6"\`0\\ƺbF+Vi]qR XXpjWMqVD+-vʂqr]4Ubƈ+W,ثhpErXpj1x\J\cĕUZʘpłW,C,"J?dJ%\WNsT\`XW,WEbB Wĕָ0g`'` D@˽/G᥋,8IGUjU8XƄo(zʨl8z}Q'×V ҆+jym1Uhk XXpjWp5B\v3# ˍWqE*e`^WJ+1MY5z F[WR `D"VbZ:XO#f ۡ뢙 Z'}b[˄G+lW,D3dބ+R Lj+k1YW$!AXW֊q*M!8EVP',!b0W.K mRScNS*3d%DK~5 [auSƓo1rMڧ/OگH p U߀t:/nï3>Ϫ3`ROQK )6Ͳ\L{lHpL.XXܪֈJܪctzk%؈pł}<[IhY-q* Ap\UvW ^8]O#VJ |$\[`-O'@3{Dp5B\I焊ɺb.\\ H- ud]Wʃ*"\qiE X W2\z W/+mKx+;tl~jJS W#ĕCL 傏WwJoF++@&,hprWV]ƈ+M_vSqEUXW+V|Wĕ lu(*T >b," zܻr{a!1LZώUv0-z9U/8^r+Z7P^* ,b+pulho0"\Q(&\Z:XJ7E+l F+t,"0t\ʭ| W(]LAW,w "?d WJX`%!\\'ҋJ+: 6JE+;~j WBƈ+r1YW$W,7W;Q/Z$j+*h˵ld ~b2md#8VгV#"w%@EiW;[#Y.tθ^]Qb)zU:Ug)zU)\4)z -Tz F%bx0Xs>&{Sl䖃-tR 2S)k#7{r{b`Ϧc!]|p*]:#`ZXj"h|o:%XpK-HS)zJ%\[41+:G?N+RB%\WR`*\\ec.t\J W#ĕ2ª+]4"L,b(C4 WclL >T?՚}WJ# `ˍg2j Wҹ•Bx|W,WXpe8LDL^V=25zn)/v/who֢noN.P007E7f"vFIznvT .t^nj./e⢹W7<|] oɕ+A`T."f l6F@:^I)æd+jIQZ6=.Vz3/4xxk rOVDŚkkB$ )><+?g9rZ//'ëmXr--?^MpyQjcZH3{N{bH9A_HIEmq-z3<؟ڌp=^yM)2Q"j/ߐSt{ 5oseۦ|t5euW Xɻ]~az?|4U?S?__5dZ=8,k՛?ViT5q!;rFz#=eHw~.ڒZ:ڹWW]5)bv9oW8U-Ukl*(wҧO~*?؉dM5T#j^Kk m݌OGdƖ|&͔>;A~oWPL2I-t󡷹ezY 0WobmA(j A!-_$8Gc lYOcҌ g㩔_SnD8yoz*P Mapk/{8p_s5#(?cVwl1ܬ_Fq#r4_]h[!.ٟ,#%كHV]}t:hzY  A9G&2͖RMG T⯯UUմժg=*ѣ yߜ+Kv8<%.SҴ^.qC-To{T<Ŝ3 ˍ:moDzoma @¬JZz~Yg{q+KT7D%LJJ *7v]ޕ7rl.%ԯmR}|R+>cgSbKL\ gON&.ᐪ|uaK07z4Ŀgrᥟ^,yH .V,nשEoGoْ}<؛gQ#JUHj>rp c%pI*$ Y)f:DPpȅ`IZ6HPVq<'t̲;3+["=aP7)Qrp gZq9kR$,KJSxRJAZH s6G 5Z5 BmQꉟ& +cC6ȹ 9 FXĬ&{J:,+1aŘc myy!SEh\P~DqƳ Y(ҡ1cRMHݛc6{5/i=mhJ:g߻?;$TֺWU0;J&uH/X{{c{1($;FUci)o{Sw.FUŶAe'Ro$b{K(K+XrYdqVV2(yo~ [?}9kloNU3[rs&x{:D` qQ) <{V6@4ٸX+u>#JykkAH.#_xIIƁO{۽cazh҇ <"Iww-X{4+ 50j2&4jYSL$#Gd#({A)dI1$4NK( 
",z`]]V.PAZeacS.C>Rr&s'!$}*]h:ڙ9B!qS7[^aeH!GDJ(+<&8#PIe3c#8/\!;n 'Ӂk"l%}V|JE Q([:a6v=fؕ9'PRԡ? c}6觮Ond\-x?.ɳN'7QhS}Uq wi}kk{;r*dfڛ3p *?`Exrrp[II8s*}/SB))2jkrJt몭MYrAЃFL.R `\KϹ Fٍq+ c,#>).-*6*tImwM& n?wV+.$X`6# =2&$ɨH'!"f SĖVfb(b'$i\) lJm#dK&T0B3v؝h$桠vgޱ/jQ[Q`xØK%΂[%Ia\ZDnYL(QnDŽ@R 9š,jFMˆ̇U"SE̜x؊jOXP3uLjxD{W]&h ih,)N>uJ)@•v:Hd@3!h5jn$\Fz!z߄ٍ.:.6<3/MǸhx<4UIZU$ =J&r$1 xOq)pP3mxh'ۛjn)^mbkNz= 8 ~nXf?M 58i[7`cd.xmg<8+J;>|Gs-3|G.f=)*P#㎹cfY2>ݰ4y/e/\ {88ILyR%B+ڃN`']f|H쓜^m}n/K e#m\p0Snm`U^o$W_U:~kY;q-ԆXCNP7K/-8dzv: ^mPޞ|g(G|抳0: I-!0xh}vRgLcYjgL*vk:ҖZ~8xD5%d3Ij+-;dٗݏyN2aYC[&|$^e3k6x`'P<zŻ߷K6}]It9niFsX}~@wKz_:0:׬6RHBZC ˑD}&IGA>sh2~8e7!HƜsRD{7X"@&Hfޒr$N6:cQw1&&%P;e a6]O8̜,&' =$j;9ڗ~&˷Hѵ6qTSNJIGH-eQGd b"3 xByn.@0VR {FiL,rV忿>٩$|Dc)oj>CBȐ~˩$HF,JtBz1NLQe!S& ',1=RJ8:3qտ~a:>GۇrZ瑁4@Pxc=[*pS<)5LAct׿pKQ;3AR$j]$e^uFx츒LV8˙>GՊQ'ug<~1CPON(amIE(ȃ5콤x9R3JQ PG8j3Gx7~H3kmI_C/rM8 k ikG4l'wO5Iɒmʲzw,Yl5QUb=R S& dt^ qF uiHf{Hn"cG|)q&\o$/ɞo){N]b2)1&gy&#QGI6T©cO#{QeB3wz]qw36)&2;p穁DY:$ :}&(])7e$RZm\ϞP( [JMsT/!tqe08y&E (]pA(8KXJ;gEbcTDr#\pIw~#"t ̈́RH#…Ac5(Cd(f`֝DXP\;B"1X`V{C"&$o9/㩩)9Ԧ!ɏimʓœ} OTZ6{|wXGИuIד%[_.⢘Y1yD]Z{SnP5>| (~o~VƳJܟI~P!"RbHcj)m0J('dFN.rb*^I$M^[jIPGG ٘ *;ܭ,pjˑ{ \T[3Y^63[GuXk/ζ tfZ\p56>?T#7Ҡi<{Vd_5.`+;Cp5b21@9O=f&;~ieM6GwᘮF]Q͠kn+&dR~@\*π4J<'7މPųAD*A9}KQIe")8\ ] ǓSA"d6X!QB7ZjNbQF(b# qرPᛔ]g%jE/} 󠟿~Jknn69% tfͫmr6X*0AI /3> :)T:U04IsLy~}/W޹8677Kڻd!EfڪRee9-dR$~:Vؾ쨽ߗ1?" 
[fF9[i[vq-΃ocGH)QSb$3:T].!+(kXRX5xB]m}̼|רǧ۝u *gѤ]EJZXZF OL3PzTUDŽ~3l(| PRj"SXC#w&@.SMysn-9Xq@[DsLPHrkr-+ "7:Xkʨ`j Y:eAG)$IC"t =$ ,| qR,(| 1oD\8|ƥE#T `w  7"/2Ј:*!@=$eeZSG8aI(!qD GDL_3&]G!"PԷf7rpYpbA♁F m\+6Jm1RH➣:&Q(Eh03q/dZm6v̴8~\;}ğ r`/F"3mv~p~Ze_),m¥8/B/|1Y/7dsLW_ n{|ޖ촾i|L0(Iu=:4!'fq@jL)c4Tyy[^9d^dsnTq ,=Ѧ9LMB=hy^Wk]"Z_.$E-~~2N+Y7[pDgY5*5+rݓim'P}†LQmt)(kP$vs$@ɕĐD'*l/8f<(3JI~ t#X2::T<4XĉY7ypRhZP)gvSZilT.2."Iu",&po'/3ة_0^evNޡ]xꈇp?2lO:A"hn@@EŴǢ<x?L>ʠmg&LJdÈ2FH TnWRoemѯ˗[b9"Э͎惘w낏S㘣*Ϣ0H a1 E7knD+<,M >.I4լw` cAmɅ2.UХ1[^Y4JI4-ts,xUSU"n0兌gdxgN*զ;^Ӡ廋rc $!DF&J#BJCZ3Е=cIMgj:#Z) \YHx+?gxچdAPΔ ܿ/Aq%Z\i6%Ɵ~x"jzf^2m|-/ y|^]I̵rs!/Le .\e H %XE.m;-o}Yo!?E^.H?w&=e:(20qߠ$^w9&N.~᭍w=6Hq'2Olh5-;bx!ex%ПF])aV0Zm7E9oP]X|P Slnj 8kP. StD]h" xvFj+#qXqMOq>yX꺠LsH3fR)!qNCőKS}dqpXTNQ|x)P~HTM]O0nI3BV_-= noz,ݽl,d1ZyCBM \""֬fDx7CFNd}uLYok`㒢jVu,/=hqnZ@Dr%Qp9J>y(]4o$Lo "nuS9?lT;h SzmgdI8HAOhȁt 0 wP"c--}qNеmҶ׭fEG ;0#+\&'=HAHO)ur#pҵX<] *!%Qfch܎lYz>NfF;LA_UlpT_4Ӥv2 ;5S7jYLǏS*c51}yJ%VAI}r 0 ȚUf?y]vQ;dtymlp2Pl+0 ' Ҧx$ZD[%N9-TP%bRi-FU0xpVE]W"XrK ^"K (nˢC=|2ZɄ3i|YYhYhQʹoID+F'j[ `<0@d*;,zL?ZAKGP-щ>CNF稃~[Nz3ذSoZ9wRQA.C[rM_nY^,x |?.JnzfYXF1_J+aʋhE(,2X8U)z4ڤ Or+`1MX,p!S޸IQni%zyXZAF<{fJi-_f ~>jC#3vZVWr;Ah;̀h: 1Yh9NDu| 9 iQlCmqOD>^(*%'Q6TBOmycx,A(V݌ﲥ7XB8aQ&uFRL'~OBZzm*|fU~0AP[ (%'KA1 )dhC:`cxKӬcQY# Q(!oX`4iXwؔc٬ar(;A۬!C2ː裱Y["fZ! 4T[QPh~añ8m8dk՜y`xÞnj9{#(~Xwz-C>vF<;l_J`W@mv{S݅ k$Hfx/, t٨—-%6&:pAH?Sc,O{s0i\ÙR/:[k pGX#v@hC'1;e׻매Ѽ;+!*yh3҃8p@="%\;bT~àހ lj޵g/B zVnp^Y轗m}]V-SyWjiIjԆc/!+՝R~^f .mW}CW}?Oy2y7?iz] G \&+% E@#C[Pt#ުHɗuWf!{>j3,dl4cNk~@. 
w^Y}U @V3{J%z !,Ncbb4(c@ǩ*f2fկJ)bT~!i4.ų2&陸XA/u?k<v&Rוh-i A0^l&4;î=B>X!Zc4OJBϱIOJ*y!T1^4ZDZ2Qi8s|OSS,'Hp!bY̮Q<QFlD}^Q0D:^USA$&D-* !?Q>,s HvpY0nD[9{65Q~w$_L!N }PPUhNPHrj$ Uc $N}G6aZ"kL|:Éd84BG91>s L:;V?^xFUֻ RM)  Bro~f9M_ܨPo5 T*>A vY)ołY=kܧTFt^X]5|Wv4*|Ppi#v:xZ#.#J}3Ϧn-+yp0zA+pڃ$׎LL#a^~Vm._n*Q䞲A_3 pxIas:NMWi^h+ͻBJk_$݇ FDNY<<6 !6jvRE3jeg;0A vꎲ@L8\GP; h|]&k!َ#;ZwĎ+_s`IcPgι9Z|N>_\zH%̞'nI#㷫9 tҀhjBO(X| r i{շOn)3U^ &=lĕ=JXC1zoBPv!D^]X߁9{_=,I8ǔ4MFn3Q@19n1_CTh(s'Q(W"kuԞk7(\9LtxƉ^(=Pq#tօvZUcuNu8b!gi2<0p6l D !>\)y";kLmO0bg4tnSX,1cLuccXd%Ap͂v6 PG/ӲNgCl.{'@y\-ڈ_zUUWN##/kXSgsEH31 i yaȳEɁ ŕz %xϹF7@n|*H(2Bp ĵxjnY{DW1 Br !/-.)1RTv*11l^~~V!.CY.ozcȄ $CQهVqV-AjV?^OċV9q,ot F1O B)6#Ǐ_w&:TIf\ R#.2HVŚ?^ǃcw5rۤS*M@$A(2ؼl>,MS=hvt[jn f3}׀i~u L@7$ ׁdy`sۿUr3ֶEpj+h 2;0PQB{.;%^XFS> ~X0Pg}'upJvU ZO |惯;'`u?GFSqwtm>Y;p4B;jv:z<6+qyX \$#/`0M :w~#A.)*nBNJUOi:O+1gkUZ\eU*qJKD>;pm5nVS2};>*wũuBvUЭ%;Fm5Y<1Puko M NylYݱsob]ku~,v/98u uPy =Z=\0S2d-))%GR"i^_}݋ӛ48S >ld۳=y]ɮs̓`E++dL!jy~*&_{d`ph_XsPR}\F(_H_&g.Dzg?H}~~sufQuqj;>0P 3 >oj P4e RnVS zsJ hbH.2T_yuh@cf  ʝi7ۗ/XcLX5g[_ T"ZR~t^ Jk$1.ebSc8zY!Pj{l_xu~Θݯ-8y0w@hH?3L@ai$_hyE(\~0Ͳk_.%tz"Nm4 2Mr·Z n8pHIp>5k4hA듚gƎ5[bt-J_ ԹFo)MKJv~S5Hܧe)-]zzllbj%4jaꋻ"ÈsfdR*HTVEHQlY ?ua.D^fJ:@Tf$EbXm}MJSs(ͮ>}IN?\ 9mN(Fy_Fg >Z廥E98vuDKϞ#sIӋ )sx"M&TCbV:IB$&4~X渥(Wʉ6ƥG:1$8svcؿMwD6QwHR`,qJJ`P`а  U$=)nrf Y @XC*?i^{ָ-V f} t4]w|&:DZ[o\SmoI.mބql%~z9{8]1ۅVtG 諲zFjua줗UrS'D倶`s?>9RTdWuMgB ~pEm 5,itVgg,>e2ܡ> J5o:ph;GB!cluJ#ŔKΧӅs{o gy7}ZK8N^Tfuv~P.2< A`v0͖ ߵIGQi8OJC >;e"1{KڨOp}\Qarm\#'R>$#Eku [HbIU_V\AYy>VHv\ a+ZAX]ȢFǃ-LI?H؅._yT&juܫBR*ev0qn}oUU`F[ow.QE餸5lQ޺A>}Z5X˚j =ĉ]cJ*;BYrQ RT[1 j>SHpw#sto,BՕ6PVi&V6ow+6 aFϑv8{%fob(?_V/EG3E+],tZz۽H‚V RcTm?tqbS@3H\C3f($񰽺Z,SZip8ҫRWmAB̕&|Ž1ёl=&}'n$; %W@0 5{SPKQQBi3!2gRUJf!pL( :i|"j`"XB*ȁDBŠ(d)7~ ̀Pf,LL/7EfAUF4@shˈ<ήL)@0[L  ChT Ω8g SAߐ@o"WpThRmXlNt, b0b 4E@mWQ *dbQrKschc[*ECl*U d2C&>(pJuP@YfP n2#9πrxW/Ep+KogD‡`0 ˸7#g90qJy3>|O_j15g?!O4|~J_|\zBbe6\ĝ#BO*d.8>Dc6aFH")EA!4! 
S`z_aƾWrX$v&l7 <:V45KJ2 0˲aVZx$Fǖ2^kPʙFLY2tOke[%܋dBNcWF Cʻi󒄫H/-`" 2 sb3)$ya]Iouh0=atr-l  Z0`'=A(Ĭxy`y-< a+"u]I\jAHIidn4[r6Svwݠ  :ZRV|_')L[Yq^u<{K&`GoMD;@np҆+e%WxH)P!P2XWJt.Tȸ 5T7Y7ӫtT823)GUS~+d|u׊OwA m;{;K,X22?(ڱ#xן~z9x?ݿotUP"aqc'==G?u?^Qhħp`۷ ׽k,ev P {*SVZt}0a>FU4Bxo:g9Y@ZD0B@agaz{ IC5AB86\Ooͺuõ88!;4 jAܬ>Z(h"h6?eݱS"AH)JA$RCpc Ne䝶+m8r(ʢVXCC@q%Y8`J[sVj:rP@af >/%`.F%W7n*X*fL)OE?JȄSrekD-iGF{03 !ǁ6B$ 9l2nbBn(.̥Se\VNU )kt-8d{Q7n0Kodf" 6q-؃e(W q̆m뢴Sv$D]:~y&Zy ( hzW :MUoY\ڠP%8`ǬRD 0BZLyrV0 l}?G2RQ 3\9^kNRl 5*no; P➄M`4 &c*ް­^ՒUvMK2..өOaT Hup:+ *2JS"*y\lqc9#eBJ'_]+̓{ x1!yLOe.2#D'M 0SI-HʇWTȼkT>IDxQ!0Rgݜd:R5#{B%Y8WE!Xz6cRKFOsZ0Y J+ 7O 9@Beގ[kL Ʃk(INxॼQUxڱ //1WRk'֖LV]沃enxz4* #Na~I;Iliη@0 =$d?z{Q>s1蛉12_ȯ>b ]shXes1Rr/0e}L}neZ"h^-XP+)YiA 2Τr \ϞI)}80x_/AI6za<ꨧ3,&ٔ $<-ꚕ*?Q]x ݑ) _䞓T|X'Uɓ  u%Y՜EqЖF$EW˸`:!՝7IE2.(%D[I /Uv..(<% nn5(GJI\Z05 P!| fJRj Ls14AN~&*hy1nk^-FⲓJ˾$i 6ȕ^h%CLٖ'I_I.qGaH ɠF>`K*d\}!g}`dL]o=+!w8 ]ćg$$WVܒu !]ޓ7 ۽dVw`~<.]'!XnNw;Q3wP&Ɩ9(/3C%fJAjQBFLsDZ]ujIL>!Svy_t5_-kb_}Qpq|IN;j|9fP>/@Ƒcבx(.m]Bqchw>VUP+$_H {6B|0̉``/L?[)lG=zGz:{# .(,_kpfvFr_P9b}tˌsia>-"% -^a&Hx8uH?r8Nm$lǭA[CP>3S#p{+L G*((y\xo07GEWI.6ws4MQP:,a%02Vq[? 
E(%bFf8(n mA*1ͿB 0am3}y ZW jR5Lpg?{L3K41mد'6.Asb4)fJ3enLL#jl+x;F:9'r2j֡ ':X9 Ha.a?T.l6j0WnĊ)*k4CP kg&]A2."\udC$#0B&)[v┻;N]GG2\6\{ @ED@H9(i *T`q:Fӗ7Eds]?9G(;8PŢʋ4КA gL0QM\W x WlZNH:)%JH37m8ٍ9gQvwvg' WVX!άPT)IvޕXqv&8pe$6i"}o=_r;`pFg*)$E=a}ֱ&Q1Jt~7+TP6|)NLx lwُQed"JYV75fqGWSǰbWsDgV$*3T/UYq;&Mtq[agqu#w cmBjXaeH ajϺQ831A{_M8@E\E|\P =r4`y4->#wҬˣvK^p'-zG\̉$mY<Ì S!ә1B=qI6s/eu SL!6;D,M!G=K&/qdA"|9*: {Lӡyol 1`Ll8B L`nvl}L$l8FߖU)DbO9CN q5"4?yYۃw8TK 4M쭐B㋉tIֽ ̾CbJGw.dBu0CYK#Ή&V2avm/I@ONgX4黥13&z0H.IdHp_5`DՓj-%iR_:a XQj,IL,!wdϦvŰ|6EiR6F?~7FpE*j'sapPVȸ:/b@=2@ZpTȸv܈Pu V&R;Cwb4m˾=;%VpΨ,wuotz;uT[|1N L{9ԥ H iQYx(sDV*d\#YD䫏)iBF'%b>znAISO.vJ.FpL]r.*DG3[%vuxx: Q*BPgS#T"T(bdۛZ84L^MM;;a 4ROgd6Qu#eXRxIf&F*\R ,e|(v@g(J\V@R1ۗoE3w#"Tg =;oq1բ fť&1yS5ֽ_Tm51jbRj\Nk\]n(xWOuUfk2kru[qWN 7 h8ؾ!jNP@J;v[I D&b9E[XvYk&HBm%c!{b45qR&.|k!5 -^BoS }޼5PYcA(tPu~Y-O G s bl`//s&/IKH#QF88V.g!CgpQ$Ev*,3# bʈŃ|,gYd!>m<vt6O8١T"LjU*JIg?;a;%y^ſ Ə}Z揥GMi77׵.jsYw ~LcJ4_|rS7NO1,lMc!` X!bp5e>?PN@uϝ%4#0B- DOG#\nW۬5Nh<dDkbؐ0\s 5]ܺB~Q*yEEh>~Hs.RJ5S<2XW@Kݑr_!KcXME$z4q;A>=y+qqZ{OL-PtJ}]MOu8^2xi#B:%j"dm ,mѹj'!1өRC(WkÕHPz7/ INQ1D\N;4A8#TX3CQ&Ơ3EXtȲtcMbnVRuCAI@a&?]cBm&ʝXđ20;Rx$>&AXsIY*hX,e|a\3%EN!TOJd_^EWl!Gl- E@{ԍ5\c't~w˩-wKuIҐ5||NggkJԈڰ{/6 eUx|2l)NfT$K##]?Ef*[谜O]vo)^XZ쇜 vu&%Mfw0P\sa[_ fB m^/4]T RT""#mRR 1:gP 7)!q7Ղ M |Ήi|tɹԙ6w[۴z&4ė߻#Db`ђmЗÑb!qS>- _m<͡~ 0IGdZ!ˊjQ*x=0Ҡ (:c#_g9|GtuHՅ:=.Zd<ܓ`:S|=ݻ?NWRK$46CqFHGQa[0PjR&r:O}jGC{neeR_y1ͿAwb4lA DyN@eKDyw'N t Pf *4;'seh1G9qg2`eI}sS`ɛىG~Sxanri)8*xB%8AZݵ6M5MK|.]B7;=_6~SM=}^(}2jxޯfX>S`0M>)&9y>y?85N]tZzTUBv95WZ&NGPȖ=ʭyAeK7nAOL1|A74^4PN w_҂W|r͹]rsG9<(t2ǰWS^ Y4>ݾנBra sa:kFЪ"n*NPU5>N4TRP6Fg f2h.ډ`o{U}e?L=u A꯭i]ϓɜHִ!BS$Y{OCEdӱ}&+e | aol " q1=\7/x),!E={kP'tarZ1+>O/?l<ߢp1ُ}df##I 5&m4|8Z1`I<~[l)ಃ8]~Kͅu8O/!O&pd&_gN+d|~)ߴMtQ32gp+`G. 
Fubu]V2M"PԕwmRdRht4{{(+2R}3\gQ>)vQa,4 p I c*dkK!B;o=/;z\dU%~cZ L*{iQZQxDaWbkB;XŃn$!cJDsceYpwȯuڬk ;R~7[<4q"IhDX!&ULY\]c ch">Yr Pb7>Yl'ok“]V#F+bWN?Vy)F6/b%&ÁRZ7{Qj^ANyfykPn>''d jrk4!@X QiRNIۊ LLE$&C,冏(]Y7AХޜli4K||TRH4s[LFlގ a@Hb|8Fp1:3dFjmU(\#\m0zq'cXK]ckK[ l!'In{>a"5GLA I*tQ_ɸBemG[293^\|Z:}xKߍ;gV^'QH"87c9/\rn#pڑ6\W2^DjrÎ|}_+my6O2r cqW2ݨ2tnN~f;ڤ 8܍oj!&q>UV!Xr#>^5/VxY/Pm\0~.qc8cܘhEZMF ^#+5fH9ƍo`=ʽҍ/{>yZf=zG#3rCd n'  ӣuBxM畼<k ~1W}!>lqYN ocvJ`}ʉCʺ&Q)&%Vϱ0GQZ(גE)@gA^EhF'Ch5qvO HY]rZt[pP+ `Z `Rq]dmKLIA )l b1d8B 1vn߰n)FUVp j1^%-ڙ [0kk,Xْ vNwۮug]Tv_!-.[,zq۫ R^}g  Seg4,ύ[1f}e qߟ\5XaSѕ^vW|Tkm/ -(Qe>=X=IܠXqvv~yڼ׃,}߆LK&-Ei&+&^7XC!ƤPCʹz]򖊎>i+. !7Kk2 cɵTX&Ise5T-}MD$SNO*2K _%)S颫.] $OVo _o A 9 {7=0QG:= .uтCl^ _rP)S; یbB#gsis2ȸz"JC DF4K9-Tr#x̱WPݤ˦޻4(C]q:;՜T[ 8HJ%L-ZepKyeN)Zc$'l.ZtcW1MjϽiJF?ؽ,XLj1b1zi?W#ZkFv'NO#nݹkbIX.gk\)bbAI]yPh T/$L#$ꂢDolUQFn^&̪XW@bN3'cBF>ݱTl4R鰶[  B`Ŷaiv#zU/W8{9$QU}Xj 9ڵmq Mdz2!* P[XdKET&C/= /΄4d*|`O\c&+Xe$bls. C8~Զ#&)7Yr6'PjcZowB7+^O];Ͻ0!Q-k}2>j@zu1e/ϥqZV^e5%nbLvm5 ܋ 9'nΏvAJBAW)*\4],@P:Ͷ"SZE{:^?h{[oca*`(hKay̥\朐 }<;;΃?y3"1s H빯ΊVs=uFav㬗Z7AB֑tN\s* qКAY"tډhM7*h*^iqD݈ﮁ ZEВ"#q2&\Zk$y u7,ogM]{!yhgvO߯cF{laG!}|Ziksabc{E+·M-iqmdb FuU`"뼴XRF:GLZp`v\p8g?6/]'oLތAb"˒Fh+%Zqs}fբhF;Be]_;ną om^gRVi7!cU+7xu{t731t t3 I1ih.ه{]k*MZI5Yugj2+#`#eK%2bXCU)qN߁m>`,7deDvD4; q'!g!?d\t"_F\(ke]{pOX=J+;wKoц]tM kҍdKFZCsn< s;-9-j5NFl2?'sAוSZҟ<ү>w=ȓCwH<He,`~D/|SٖZ?տ'g;y2^er:q/Wu7,~g.fJ;o|=g h2?&Ӡk`3%5ə21~b޵Ʊ_16Q )4?08`^fkDZ|,u|iNĽJEL5 K֤EL"Tk@njݛt%dNKL/ZXO75v;c_q\@-q}BAyWr)㳅Vl*T u]KE]*͓{N{)-$,Xl083Htq9G!hsAǛOzKφȪBRW)KQZtl-5ϑٍ&}ЙrꬣM-[!̔\/wy]fFej2YTX 7 rO$wW9]BX]+9Z6()Lv B5FX@bELFvCJPwȆ 07NxGh#)W6.^u؋VefutykqeC6s MoXXp.a&1VI;@ˋ)mtL"^VQҔ]YDvwSɁ(:d%^b/5ׅ;*+e_C||"Fo2 )Ug5?YjE+.() Ѫˈ<ֳ|J "-Wx`ǩdc"pX BWzsx Og>:qz]2tQ*_xQ!C \dVjLcRj^Mvcy'b٥0+7 :HڞrO<ɱt`a(hXdXMze 2K,.h!Ӱ3"f-הP!1L0vk;K`-fSI%r.3eD/])K̊LA^R:ݯF =io/v} +j7$0Z}L>P.(vW / WWG%Ǘ6xpS o㿳aXˎG3Bn{Y*xs 6׳8v٧˷/./.ښZ6Me 8\=˛_ M76I$/q1gvI#(Y܇fjoʓ?fi``ו򛨣>ܬWWeڭβ>LזtM-dOkykpC/gfD^*K)`ݔy3P>E7&7N|n uLnK*5 :/h B^2ί̄]\9!{7m)t tryG@תvc7Vq.n/x"-@.惝CiD6:7>F4 
Nf8TCwr/c)JxdNS7[mVRsp%%xm򦫣t #Us ]zR>yrsS3^qBfC3hh$?3qb0 c=v~|>8pxuXppɞ%¾>2e!%SY}>99{؏მ{~|`2["@dx[|p+ $"|`5}үYd!jyIuK@͚ _-<NuKGLtegWw<9"]}xW ݳ„4\9ǣM;Q9?:=zjEH__۰JB[InziqY{.9FY\76GD!!nqJ3X)רEtOt>3U廲>s߾;lA3`ɫʍ*9=zd빨}L`_'q/Z$5pαb^N6Gel޷ycϚɇ}w۞ޔ%7Z\: +:Ĵu݇ [kpp=(g2~_ռ1}qԗ=I><1s.h!9 ymoX7{hC%A;7&GY+Fkg~ vG=If9T8{-r`朘_ vn}!6ݳŕ2N$pUݛ\k۾_)vɺ7 .Z['ǧxe2r&bZY+[kM\l^{emINjʜ/w_Gm\ lLR7O78*Ni_y"S w^fWBT`.4dIF?k(EB)8;Ʊ)CLm\zgFlbB|A և yt.B[(޺_(lI &%;YRo%|/C=0Ux?O]p1֚qCߴ2@Y5ZV)hUI!Dl9D{fW ctXd(rcG*m 0nW0&e}Lx"k1 PIFc.vtqJ wx9t>ԪG}j_SI{`[Y]uNJr=!hSk$u>ŔMKlUI/[ְXw~Kk!)FE!OTehr-3`gE)U k WRP.>؋Elӑ({ =W--Y o l/|zw|Sbה($WJ^ f'U,64 x8iSh𤔵Ì|ԨWk 5o ]#ܹjY:e6i 3VfZkJYkJ 0ySv6- !&a"| rj1; IWaOIj-oش(Q?{׶Gceb_7& 3okc[=jY*W%e)TAnѠPH8uIv4Շ ?B:`#oOm8^'y׫.0W8RAfdb?{17wIӈޑ 2ȧvA`VP5Y! UBLTYS>f`#,9lG\[&ۉIRMMAlpf{K:i^Rc(نz( &k Ab3+JUf;+[}IaJ,ឝܝ1a`$J,X#5#9)@51z߳I Pd.O\ ;g-|e4A)˛#o+;kM)O&8ZiW醘HV;<,d]k &:햺vQԍ]y$GUÑ i>HvE%f"a X'#8T@^x8 MƀK"7ΜtMɅkbkR \"@oAp+ky=nUh4205(U/PEsS# +P11je!m=GL"*>"*sƮ]opHh6?۵n8f ƿ8$Op]C>!A~DNU..ނ%۹OH&ZTMe<=öqRS3eV6.㑚mn[*|c-+'ϊAO)+6ۼ4K6Qeˊ۲⎈>MOJQ8RǬTG3fIF}S ټMCL#DY&=%PJQsj^&6LXM#~{f'Q!ii1pvh&Ef[hr`@BPo+>JYی]aZxf-?5.2bNJȮbӮ8|'UU$m/CB#Y`tNM5R3T?P(,jCa_OƆ4Kb&3J 0Fٌf_IkpZ%S3!~&wɱvɱ<aaN"\@(L,Ψ皒 ל6Y]+XSmެ:8ُG_y %ɦuӉ?ݰkM6t@y~ QfD4_h'b2-ukݗVȹAjƿmBv " lduQklz, AasSlA%v0Kp |mc)krEB4ʗNc뮛n p|Ry igV_fWʥ\3{,BfvwJaT- 10Ozb_;ȭ5#bdyxݣwSźwmo4jѫEߦ=*s9Iqk߳|۳b?%}yE/ >z 4upY,FQ z}uVѐ*DfrS&PrФnj\AV9 l t?PCDuL!A6pK;".GMw59 $U\40fvuhU24ᴈcZ45J(4 SD(# 3 zF %U봾3;_8UwqxFp3t9!}_n,}~.ԱceTY@'~(3˞u4Vv{`BG0)I5fn7; [4O9DQASi5jM6Nk[sUUk)w@F*[m]f.L좏&y*$S^mw">/rGғ#\Wzn%wM҃џ3Q 5TUDԆ}:=[h w;]W*hL =-E["FAF}ZAy^e:[n9xDQL]|ue(#U>{[VY|H߫׋\NӱD<ּԏ|Jz;'R]7 W{Ul\19_?EH5OE|^v\C\=tB?W+D/?Z]Mo> lߧe|&2{/v:/{6v*՗3ՙ/VoNUILWWwFg^^1ғr}O_WD7 ~Ģ{&p!5h8c7{Aa7/lBr~:w6>HեCv碚XiGc Džp9S@)ݹ %8Eq/?ugV7pU*b?J,.'qĜ@bO6lf-7 !~?Ig$vy@N 8`J2g޹熒LA kC1QLQFO >#{bo0 `gszrv/bzP;t fa~r3-D$(zJv-RJ7d#Hhg2OgoXy2nnv>L_6h}&q'fcݍ~s yvw=9m`yNٳ"%7konv>bA l;X`vby|x 3<3uN'۠ݡ|.ɡmn裇v,{9(@|-nCvG$OvGrx߿p ڟ[ζGCsu-|pu$s68ieDԓ$Jw e*&" ڐHI$QnPM,oe7;bk!d:<{sS}T 
}#jלw.9j0TTXJDǬE]A$] u1"ڍ۸.>37ƋwgTW&2t^Ne5C' ˌ,=]-Vu@ò]1 Tq6-Gf#k\{&U8h,MFE e'>p]2\ձN|vֱN|xkí\ERNE^_ ]j̎z%S47HػNV-!U l’JC{#c}[[TzNJ$p<\QaGyoPg1ЪjBCZ?B٢lx$ѶOJcђ`2"0k_.| bl> j3oQcp^+] t|| "h "WG Y::a%$I! JUs$'_ QljP"'q QY^?l*رfj[uz՛gIaff9]ۣ;4Ol98 ǠpVZ=L.AYCE+ l/\הx'(fynUԓx,\IG 7`- dIsomP}l/5+TJbK<V9q#Q0SDÕ9ȃS@}0 g>.[oc蚚,qsm1V|z/gWZ1ǫ6@)9` ,G՚4d6Hiʱ {)\F jKr(%Ci.lJ6g5l&QV{ _Č\",".w*-Q#B:+tV桪{5j0Pl0*#[`.ՕdyaѵU3`Bu,dL L bD#7<9rb]W-Ѡk"> 7tGH_CU#EڸHq(UOA;R-QfDv99:l>qC$['<}"H։FpwBF標H, $"ʄ 2 }\DAO#);;IىgqN2q`WQ7x1$\vNp~Oqruawb_ߗֵ UbوUWA[z1znPKb%8b+c·JoM 0Υc ,W4`R|M͕J)o[7o:1ڝk޲bG^˱< _]WmK=+fH.Jrjo8M"3}|+,uϨӷ\39s(=>CZTR*ڻvXSp*SƨE :ږy*R%Ūŭ[ i1F񪍑m)j3Ҷ{3!_t(h|LZ$٦pJ@q'EFƸQ%LQ > B iA$"r(Aq a @=|ܚO_ϗkE(ʼnJ:[j,ZlcZH5'$@Hm XcD<,5Zyu)}D1\UQ"u QţAq&`q$\6zμ,VZBA9Vk;'`_^vӪ۟=~RbtK ЅRT3ʀrR6e_e"$UQaER 5[vѭC?#ymގ}%d@^~]v)u5]쪭".8R9q ~TUԢrgC+ʨ T IKOi0B{;#x3E،>ݚڊw\K95d:TV<}u>RKMg&mrK@Q{.>/r!q.!%)Ӣ|xm.)הEr+Ty3)}hH }jV)瀶+}%W/f~\nuG՛x44'HNKׂcݠcFf5^W}43,=; kR7<`Հ;v[<}K[6XdRv^C5)xTw̳IyimC d?(9$dm/)BFNϕY9=8`w/ NխЌ9_椑NT+nc.][B͢PQY*(ᙘwY Wk*%V5,{4l%R~Hǣf,{aVi~_/Q<?D" /C ϟH}g/.-r꼀!ڴxw_ޟu4=!>B6]hlS%+Qs LbkmQ{%3FBIbuf1k1A[3s`Ndm.h! >s-(m쨑arEC\,3; XSiK9(R^OEex} 27U/)yc$A9 Z9yAwyP'_a58 ;{;IK ;/GI|*jVc|gt!&al"#wT01"lO"7>9[U=%#6u8bԚxz.w -vXJ1*KJºϝ(8/mx09-띆5(f?w7:vRb_瑦l]4&IvêNcnS !po_vMJוtc")RR镔ioMo-*4j NqX0E>GT}{b3'Rm؊5:a;ƿ' d&Ա$#A34c[fOwU]]E)u1Z`w~ѻJJJr,/X.Spg/gD]do{H &q l= lMAs(B$`X*qqyRYr z=8>C1ae*ʕpIS+S2ҘDWtqݘqyETmCٔ!d}6 J4}>Rx_tWC*,rz)dJ?>4r)ر҇r&^;5z#G"g&f4B0 ,ghM=+Ch0\ku@KT9՟7xF^ E6tZM:ǪuRXmXimX51X;cgazOxi JO0NH =E2UX/'+yi|p;]'wCml¨#>A0u0asBjb  `:&R ']YсzGD5-pkDq X[MH$'.aE+'#ڛMIGw@i,lSd GAJxFT`@hDqbZoNqz ų7L9Š''z*] Sm89-S*+3Vp::aAzYE g /<^C;`ڿ c!B'tHáE1_Gf<栌\ći[,VМ_]|^TґuQ3m[g2p#,6+秣(V"dF0ʫ'397har1\/ŷޯ)=.sWP0k!/NYRђТWy %Oܝ h9v %A*pgk[ BwgGX 't,>!A?Ə&'#սO 7 U' 5/Iðc/< Wzn;듄sztsH*zXUTR.szU4c˄ra?Lpu^}6*[XvJհ,zD 2"Xy+z pȥOcT?&8`:[$Eqxk^ dtk\8Ҝ8a)Ҟ+Bt"&p' Fz&iǰGXD|DlN1 o}Qadv70"mYr͚Ͳwf޸4r_S\z.#kf<ʸ %o@43n3^YLs@;5Ukm)*+$=u. 
|&%zKt4UĘ'$_vN܌H]r2_WM/5O!tn9F;F6ܖrc*5Mu51bh7%)w5[d/_%)Ȓi0I?w=C)ֻyP46S%ᚫ]߻ R漢u(֬|l?z8M.@;#Pj0ȿ7P,~x  }7r?BG{u+V#:GinGlvTw{$j+؊AcklFj0*"xtX,^c輇J U'I)Jg`q}`q_(G.Mݫ?ku@5zG˛y0JIq2.\ dxv}73`=Y\렟^w3=_'_;7OfxLJhxW // `I_4y*2QaSQ][SpP/ya8g Ikkp`D({b0o;~8Xskk捛ۨ3}rrRg44yp[Ŗ/N982n^8㗭*i=$1A)Egb*Sfu!B)aV2(MZMjL;*cW%)$ BZR }Ӎ52*GQ6,u~kbFk&aT'M\,SnwKbaual$Tg`3=Qqw3CG+Z ƛTbvrp}+zeZv /ԅRKV-AaJBxj6Y8RԾJd|Ba;LV_)09}e\*x S5G=!BQAΊhgAJc@h ו}T _^8#2|}8o߼r/GQB.|s U{{w+Cp;W7?f>VOjj*Ta6PDʙY+*,&#^ x?tty}}] &dl1.kxs f5'YVhv":"vʩۮ*@:u{76K??>p&oR`.OlL;K@#۩XӄdX.G\i#`]¬c$T0m,l@#e<%viBSyA">HE3czP%hk#T}zǠR1.4CyOzH*pGɥ-E TkA(HfZ/iȯ>L<@(>u6@Wduf [<x-|*ј EO!JRHRKJc<;.3V ܘZWB^f#XuL~:L({W^pcyg)jLfD )si5!╱TjFXl< Fk K:Qs'c58AP 8cA % IY[F0e-(69^!j$,օ$mnЬu(WO[6* 0NpdRĽ 6U6@8UBSE S)ywQT322mFsI1*@5NL H Έch_ ijRG0|Ke! @Oì`Z@+;k; {& ,:|}}};˰h6}_ξ&n49\P~bO @'~CIne6/].ԅwJRhm)2Sa>l BٌVۮp샌n&M^VסunF*2L1V' ,܈< S T`="1+p,SsV!'}5@j/pܷS~ vSV)Z+YNy)\.]xj>Y㭞(wlro5Jֻ7 _Iq%*DWRV%ɉKA1ė qMu\5bb+Vd-p*:]eX )b5x1\A3lL4|7g@^XXøv%*Q s5[<›f:˻Jz;\Nd{ͨN(l9aDKo`".bxleRSR&?\9U0J!h;$O s dKu##iv?^ErIjj/K&6X:ط[H}ѷ+jjʵĢǁ]e hbX霵֎f6KE %W&ڲpt [>\}ZZcJTiݚ}?/z򃹭&VHm0g0% K5',٘x xB~%^QN&0sDGanY(ڻד^]X  =SIZ*BkݗK*цD[]LR ZƛuXX5h@cR[D;kLm3QRp` 7aXreu)3>v̽%8 =Tpo̹T[]m035n*!Mm Ee_ho,l,hzXZR^ ]Vj)td̔ҽsEĨ dZqo-yjtTv;|{VVY͹R&+} eܳҢ0]w`>#DM''7_^k죈.sGܧg~5^tF%hY{^BJxQܷ O "V@P yA$dڻd; ;I膃Sڲ;ӹH(@_*=Tt*lbnn > 洡QK9\:OKie$B@91{GuXw6mG/PFAWOWKcŐT@ SkIh*&2Pe"paF*Ū ݙytdQ&x7v8zoj7i[hvY7]QNtV(I0]8t4,01ݭty,,0E~F`5 =֫I\\.fg(gGgӓ 1_jNqFYv.h|H\M3_Og]T @A_Rdzo9`7c>ȯEh.s-a""Z$^3P@8;ZrZ?_Rl9!/In5^/}CqQþt@wO|g_-f-NS#?Y(a[<'B5Z|Uk٬X2N~(2ĂIJL%XDHwxv9j=XE%Z]0)!=,_oؾQO?9RJ6,d޽Mdɧo:Zu0D{aTRt&KT`!h-*Tny3O4_gdo8A)X1έ1T|]]S_O9OȜQ(iHǔȯvr>>_:Dgm)STB@FQ9%8x@묽 عkE|scCr3blAo|o[zvyu]g#"`}QN~yUkcygi恐;FuQa%0N9ٲԠv˶CG'G1/m]T63 lc ({J6ת<&I fRQΤàkϼ٭c,LGbiv*hZ*7ZSkYE'C5(VC 8@B#c [ ^orR,/TRM.(Q)-)ap}C͞E5U@iy(A_Tf4eҨ\L8P-5l[J1vbRJq[LWYm ڏ-&- gb;/xzJ9b -"F>;##@7sHsz kfM.Z-$cEh'e٪W1Ȑ.jK4hD<@[?%*\k̿pzw+n%5L@HAr(O1J5),YgAXmDȉ1^DG8(7y9@[AK9p/)$R1,4Dl*d`l(bW4R@3#ZJ@8 i':t"5pj'my20IGjfeBg,d(o 
'.^e·\Uk.ǫ{R#9!u,g;$~HKnΣ9!eJD)eJoV-cxeyGdkWBno<弥OcxwL Xa[ fֽVEw2{@ɪiBX#b\V\.ONkf4u*&2ʟα1ӨPu7eIXŗDjh$*kc< BЊF?iC@2@w] Tފm])M6օ,Cf.Ĩ "`C`oP3;rAqcP@)l2Scdc3(na3&0 si/S25]Ak|k[LX%v (.7[U^-n 3Go;Q2E.M$VC)uڦ37[m 0w\zʿPmC5Z<$sLfkYx%q>8dY%uRC vYA _xߜlR)nbcR1y};F7B a 82FuR%kр(E^c(BD$9rΞ(#xv pюl'¬fsU^̃<X*- N;)𚙌(C9rOD!'};*:څ9#^ GE%J|<RRq48<E$,Pt  m C)$v=$za//4hqX`MK^w1fakxϘZI|1z-e\r5oߛ-+׀{Am$_uѯUmUvakd)UsP!  d=(*>}`lhӳggju:uBThdY))(ȼ}cZ-^i.9{zEU~T!H{oTȫ1/VF_ >ޭ /UC|oC٨K Ez/M^\|ߟ3E,~*⭯.SX?V,{xrtv8=>`1ntOv60]i֍?KsB>G vJ?RXK Ӣ#i`fx0iH/q8ѬM򖀈J@1+Fw) f>u^]+*]ߊZ_H7c*Um_@̌?,k*هf~A[uK۲}mh/F K>YI?yL,8~6 mUeop`ҧVj{u: kJw7~.OdNbHd&y i|-o״'sAHKۆ/V{+^^w_YۻJF%}NiddW i`)鼟}%xWEPlOѤ^ m3%hNvVb&Zr LM6cdaFpoG-k^e׼Cv0܆ }c7*}aTO%4$z;1"8j)H09,N ︟JzDJU2"Dd6j޹%647gQQ=M[SGr7uEch=R3xcK n]X-cJ*ǞYΏ%ADYC9MP1rOd!:{%s 1f8:ǎBFvL5v] {<}eVK.XgV ~gHNy&Xpni!w ^R(#+,p~ ֗ˇE tLd+Qd߷9{8J LjZ[TlE1$T[d%RTCJ>D=H]͹Fgz8brT>Z}y ٌ%kedtiw67fz{!ncyuo$2jdSt$G?^s5]E(.=x/M\=9ۗ1PTjT+ݱyBhnA@0f"ip|Jp6kUȆ6zchbhvcjm/lyq}+87W6E9yy3·AZ^QV") I0YP'SV`0*DtQW!Z4>:Zzr[WAY}V7#,VgVSz˪seܡB wEsM#仦ƺZuןώqmr% ͭ4w lNr6ݷB#Xm9JiHQ"8(z%'#NX@;N'|rjvt*3i]fHFJkTX%y gP4lGxC ̺rb`A'6J acTj6 ƚF+?aw)e_4v TjD0q|E2ȚU "+w"6ojNer[BSm }Fy0lF'`Glfgi=wEVγXOw5H~sHAy.l;P #,ςuYh5L*F'SC-V=XeM51}'zaK3HhA0 %t[{ÐbGjˆ:b׮5vqf l.]O6op XGr u h#zFe$U&"#k{F8PTl;׶Q0(-M7'̹RB2:GW@Ğ ;S [IS:`MFI6K9a52mtւr;[wOs$T!a+w(*?I|zvQGrTIBPێ!Bïlv+{s?-][yN?ZMjnc r_Ӈ_#|Y<(am:Smowsr+H:><&Ӏe~.fzxS|pt8p#4+o!RCcu6%J(= ]k˫ߚZ@O/47q9p&NTGH/= >ZE*s5/Mn "X[nWl>{u]ƿ2;qN^˷#z]S?\ͣ_īUU;P{hvw~ڔZ.pt5֢OW'#||7Nk[Y'IVFdɝ[ޥirTbʴTxA5JBC(!YU@#`X_?Zqۑ}@O@Z}n93N<@/C`L7zzcHy@$A]|h  0Y KN T|hT$'˄ݺ]>(ZЀm'sRm.;ʩ}1yJpfaY+rFuj)Y%靫X;b e6"T* !U!Kv_h"?w`TA:ATI <}>` 42pzV0${HϠ\Nhi *dXW<2ѡJJ;J^"/2ytPMX D&˒ EeRp*gi\ .%KFR>_bnMvLQz+VBg>W:Ŀ|Y ab/5ɬ'N!&:d 1 FGdo)H Z(Zg+6BA9Qm[9lYņX&$jb0DZI܇!fm'{(9zDC6,2ĶIwWB)ߠ1w`j[zqaQi-{src*X@ыA&X0^UFt6[1gdLφqjW׬{?Jb77Fyu÷$=J'j탩vF[ fivw61JV_ dh;Ra;-J2VT vjG]tRi,-(NZSm'I_)'X'`΀/A} _0> g'I\^JX͵1Ǵva@hI-Hpm-C-g520z㦴ZL=aM^2ҏ b, M "വ]î]c-/GY.04,]Y,v{`[?fd]M_xX&Pt`Wbe2r,Oh'-F!bcٍSHEfHs9%4"cņvg 
ml̦e3veH{G^7OGrSt0_]'m~|rT"u<{O'X?>r+֮1sj7 LM5~2-dP%fj<>jG&.fTǶ힊e©j.=u½y{ ߬޳;kxu?~{&c;G 7*z$twkkF^k_v%[Qf#"z.V~}MM~36Ylp[̠8- n[ȕ'zzXgYgizlS3;T#^]*laCb!$4+tnvDa@$TI-9힢fDQa0Y`YCXk6ð0uF`n4Ӎ}?M&=uMw;Fu; 70,.s2lH%-?chj>tn|f4#* t&n Il=sFf"F_~0qn:QgћҘf*,#ڛ~ %&BZFa"Z=~wD!h : i $R{7LkS|fJ=:=#Dn[Z[TfZi ߁'Bz^:-_tϢSU\;ZrPz55`(ձĊ9UsoWH"脓ipۻ@Dp KB 2 t/sS)04f?g9DyAPFjExn0'OϪ9EА/Ҽh-LCFkZ@>AD G]X[ EFP @f2 f^`m/_ٚs_//1?w,vYqbscu/n5K1_i~l>5CͳeG pͣ;K|d&uRuV6B=:[$} V+k3^:~}~$+m%o%hZqAy-̤_lՙ9u]%{d6R@ %ڒ0e{0/c,^kAӕ+I_IxhКG0Bӽ4(,?'`S >{N%G.>O<'=wDV۶g}НJ\PO&Nȓ&QʓjVATXQNva6sp{(c[a-"(o+)@K:ep iH+ѩǜm~ec\ c~/pƏY߹.iGj9|U>v7uކ'ۺQ/p꽇+ :&OH|@I5l|j.G [ H8Um'Z;)1;$w/mXhCw!~/.g>IxsU Hۍs˷t[i_~*?/{33P3w~Zyqg tH noc+6- MK$xxSw|A7jҤɩؾAKtVL<޿-eMNLF*gtxo7T}LYYEXZ}a@{?r{=xƐhdvۥfrK΀mϴӘJm/gDUI{RFE?G?EBK)WONeL(P t^\ݖON<]2J?3K:H€|zs`ql~ۨ nNa%~cp!.s֭zq ">n8ݮZYTde|[i{}LmXvj4s諠HkmH9]} `,d >l}XYr$9 })٦uH5oQ2-dWW]U]]9qR=W8DBeYȉNc2ǰG½ ҜFDSX31(Aӈ)TP\SZ3ÞuJZphX3JR}\NhH9MnCF\x#3b$~Lkyn-SX+#C%P);A^3qZ*-n,X"A$ϸdHJ G;${O8lF2rK)ErΟpsjT:0V,\GB ̴j4kbvJIGfќ  - TgUf=zm+*&9jjJxMf+ sKIۏU`b4!EleQvv?qSS ~rȄTBlO߄_ x[,3L1Q`S5Wǃ.CN>~ҞjF! U4JJ mY7?[ Nlu[m#.;nьVnMhgQ:#-&diԣuAՉmuq[4U[hNǬeݢ'0Zc!qT (v:lƀ CQz'(,'$k49DrP)aˍ1 S100,e^5WjAF![H` U4F(ƃ$MA]ʠĶQǺG3hUք|*SG[֍RV-VUT':֭Q\WYhFZ&43Wѣu -m~ W+s*g,̲C(RΦZzfHâGV93BKPIYfc ͹%Ao<|M5/$:s#$ Uj iEXyJ)bDSUѡu< o,%frh#u-S:,=m Y!$)(ZIx jjձmżM[ʛWwwn2]Q. 
BێCm Euێ jn*%rvv*Nؓ]?`iM RW-XN,L%Mw/Еk^瓱9_7}6iێa^Rf뗘fvY֜(n'%H;Ol\yXtE Ow&0"+ Lj#LN s&43,&uޒy8 J2YXlD.Hrh>jȄ)c09es%0X "He !#!`u8+\j1Np2m#%>t(JEz老hn8pqܑ 2,ufz;QՂ`ξ6L86wI5E%!)JmJͦLK 'D?UcА\Ect c1x?muA%Չlu- clА\EtJMAl[A6jmoCYhF+oh@C>s)ZQ*q #;OdqA،n(5>k ~h0yl.ɛ-p6_&nC6.B -<>ںm]C[vY]q<|^ z|plo|mu@`:RM/;>>l8wnœ,T$ĨsvpŅL]K7m1<ל+g4,b&O@zΡ0B O (p.rmn}ϺbrhR ܆2x8Qb~TQMu ,AJ$gCD+1bݯAKd_mMSJ ^Z`~,aL\g]r@10PQl轾#C17%oY,立TKO,D!//KKQbAab,d`y r =J,lCFwH) >@i֩`y^s&@!,Y MBrT_*I,D49DAbI,D{I.#G/i}I3.Sҧ7is&y5O8}]M.`@"eF )jx5Ot5O҃ '>Cf?0Pv LVG,FsԨ9nzґn+Lo‰.OjY͵㹑 س"+B?~w̠6(cޮXBpߧ3M@xz(s/ϫ`SG}[Ga@UFC:Dr;4ZwBejGjk(0{Fws,pnLӮvNTQe-۩M N;Nƫ XuAA Ikdn [NIfX!KhE^oCbMzpEn`.P&FBt0ƸcA=Z^J}qJ9MMuW =mZ{f\TZg%L uԼY B@|~)ּi6U}yX4{!^"C͹di0gл]J`M#s*s`w;D~6 L?y\#]y0]=T!:ēx]&E-$F^zD%vP׽B(> (AЁs}EPHKg ?#{e0ޙ_/)X SJJcj?ZحJ[9j S[j6q>8dA%8w#cܝ;e6 ;N) W>N zƇr `ưSTPBSq"[BTy8J ӎÁީZEPJrJM5fίv lLg.R/}ly~ hau\Uq0݆ h-K}*#)ekQVT QBmJK!FwПVI<ʋiUTsMbME-ҪD+zZl%ʁ/Ni8=iqĐj8放T !;6 vmBxK+釧X#sxʃj 5Nhӂb+ D}픵VvqQj;,PSui__xj:tJ4В{)qTaN͂%-W+ B}}I@%x:^xWE" /(#TMH Yn &WE(Jux$B7d31Y~`bib<|*YVٿILd6rbI>963qpoTcz V@t*rƤY-cꇫQϭS,^\x9pyLW"H]eU_jj4zr0/ɏľ[fEgbqiZL "\0~>lR̪"f2vv?qt +E?EQ9r0^$_]:x~u!偡7n%P0jS#e.tUX^I1,!vɺrv?ܨG"+YRb~hgG0ԓ2Qew_MGX.kr)ui6f|e9Q`Ƃɇlio1WWP7()Fbcj_axaxfi&P$\KCgsJ˹Q|F\c"<12s+R2ɿ7 0ޠ?/t;ܓ:6&Y ؆ .ZC8_k},3Uv fޭ30Bb l)BH6^,͹8AYƑD[ZW"X|w@y&P0`e!#pX>bWdC\/ Q{P^:3X-Sz9/k7$۷&`q}wWӥZJo^zAdAi7\ˇF;Kނhw?@TMt-_Xund^bW<dŪ~:[с! _ꙦQ dTIEjkBz5!(tHvTAsfN@cڰbt[6?Tʉ>_ɼȼ>CRZXaf_R7Og Vk_RP_t;M+Tc3)?b!qxsY!*o3{<<,M_osgbzPLNLC7{9 V0m @;H27ǽ{G1NF8CNQ  Q+ȴ@ەHyOڤ;R[%|DzDbTh@Œ5{S S6W^V%R|{CΉ|ws̑=5GEAI*@*Z@:db6T^LrZ& IJIe0~M:f~+ G@x<Ñ^jrE+D3{E^5gSmC"'G9wBF !?)$fgDr۸?Otڟ;Si\壷SrZ2?2'np C+(y7g;H)LGo%nwxܖ{n\V^̾_.wקGF|ٶ-/,bK(7[hyǼ 7n Vk>ܔm/OӃzOYASxG|a甖JY=8i)N#^.-<Bp1 ؽ@yş ݃3dF?p /=z !\0=R0ƢѴI4sQOyGf[Gע85?E6*YBd=.9x!)-d*TƇ2_id|\)Is8 Ti6vNNSg~WY|Sf]dQbz-Wb|z3!_A>79@=t8)|n]yxl[6{ˬ%G Aסۇۧ׾1:Tme3}Ѽnz8m7F`Tc,{EZ>==܍1X -5${K1%EqT4"=w(A4a/ L(2~}ߍ0\PE +8DP6_)M4K&X 4`LY|osQ )XQ8 98? 
vJCƑxw[H*rOG-eAA BJ3ピ{W6P ݮ\zE6POnhHԗ u6 4Ug%pEZJ-HP(I8S1BI'ĨE,12i0(e9z[x3TOx9㗅W /BA  (!Rq \FWТxEঔB{@ \Z;dbom`<"/%xYeQ:b,zC|yMF")S W0r{)OB f-LFE$IKf9v8N-GXੀ+ml?EF* hFAH%4P ǔUx 8{UnG;Y8jS?ޡ"Q}(5)L^ *@~PAu0JgW^XmٗWIiiüV$iXq/}^ Bm6 z "-,%%_:\(CI.R,В@U_0@ 1,'O>64rq*Pv(4JX i$2 An.D1 pX[E lk% H%A9\{4!Ϋ@!wKF"<%ְ}9[^Awo07*}z_p$ nڽyz毷 doy{kz hp~OjloL_o2 "bȼ^):oO~?-o6I_Ϝ7v73~^.}KO.^I *XHt2#osۼqMaf_8a%9@rVNVG5)wy˴1;YeRVL&Kzv.^PfN*Q|?Z]>ifk(-NX${[gRkr bM%aV{f=i%8-W9s B< nYD&?~Md* (K-IS^58:-Ox|ה׮~<_n2:)[O!WW^]jιWp鷰^>Y_:e|gJ||r[]rU~- cQ; Rvwl{*5WjqqUM!T0;OShKpx:T%Q"w^W'&#ʗ÷ҳ; Um# EHj۾BSu4w+0 XS:U\JX*S}.$+"L>nӭRHd=sԵ ^{l^T"^d ]7VR8Z  LЈ*,1z>Q#DA$IXfR4 o(6yn"VkFk{+4о\qRc^Ge鰂B=6-!ӊ|И;B&HzaGRڶIY0{ IEt*Jƕq+ C,cT%VYa ;a7o|f!sr:b\3cA =W? 4en+?&$)vwOƵ{Ǵ^ݝ,a6,lx.wm??56WWHC8@ImJΞ9EN@#eAӢQ3ᱪ>hѵ 2 ЂdXG9 *%\vER6P:)ysEiaj2F"&u8]\UO_BFa%>K:1fIv;P\)G3S܉⪕%tԮ!tH5;)%JwJP: 5xt4Z -j 3l=!~x4+vWh?jywC]-Bv_ wn=7_li?V E ;[osv͟0)؅0,C~w%RaexPQC{~O w{RK^3g(GBᥠv!qOCz#;m%Oby:77y wn(h5(20LI'6a0PIp❢A?+&_Ϗ!b# ǎEf-Y/b^SM$KDE0Z 9 Ӣ QqQXAlK@9B>]? 
A9%Ҡ`.`pLtC cb$;F˸k}WB¥x'J9%EDQ92QGs(>UF WSF$q ,_ok2)Rsjlkj$nW§d5Owhu 5Fع7Z_m+εOȹ SSq(|Zl): OCz0ȔL,:P)RO 3J`T0&a"5J19^&/w8e|eo\ϼ ED?FnN;hHR5v+hvBBr-).'žvC0cjB DhwvwV:lց|"^$m3ݩX-u_z?}{֝)C-?[w܇ϟ^F="`RYG2nMmדǷo^O͎2~QbO2C";0KL6pc घ`;a/ pLl2ݲp@~0E WU&SvMHt֣T7dlh?s7 '[801p/>Ў%fN3?kd t.G Wh:t}2_a/80i "tW$+惟R*I!Z(< K} &Jd62Ny֦A\;?w~:ɫs]Q}٫0k:ZqB/,%6>kѺB8z֣vR`Gj)W )C8y)ӌ q/[vpIs#X F{'X(5炶kk-QKpz_lZ+} Y6OQYf&N2c0 zٵGB` n0Wlf'AGe!),K0'aٲ+n{p.*Zǣg-?B8ƢR%JQMTU*~y9Su/(/VTfOg5\oka<ڵ<ޞat/ ?+Z}7`NqKt+j7?%@c~^fx`d GxƏq ZPZCl)ZM?}(VH A>YXc2;pX ѱ̴T52b7z P2J{00p{p*8)g@N;Ai:U8(' @VaL*Fޡ~.Nn߱; ;oqN.^wY+t{ɓC}{xp;Zݱv+OƛU:DХAW7<Z {$QйEnz06]kubX݈Epf}O& ڐ>CUhCP0#ֆ$`Lcp ̸Ψ eXEY+ce3!sqd9ZZe GP*!maP4H)8F=4=Z$*Gu+T>}us2/yJZePqRV 5b⩓s0eijxF(<`MBQT`lE(gF]-Tc*05a=Z,Шð_Z+c!0Crag֒س%!1i4ג֒8xKBGۑ(ڑr#Ec&^{b\V]Q]e[uu $[;oq9OV" q^38z}{-TM^GZݽZ zKUE1GPŰq ,VBQ1y 0{ &dw54J(?\ow@IyERo&dv:^ B>辁(RΦ63=|4y=Y̗[xo'CWƥX ^=/Gz\ mn%|q04J0C]YHyt;t'G"S%Ћs0V\ H7.12%:x[cCIYθGAщmv۟v&vkBBq)M$M1MAbPEtbۨgO!nфVnMH7.12%$:ņvӄCnT6hţO%}0MhUք|"#Sハqk7$=[,UD':mqkBgEZU5!!߸FT桮lh7̴'ϋF(+ ͇0\3x:7~Ѕh1J;DRdñt)b7/:tS`8XRpCa9G9X'h6-P<]?(0V>~bk:ȟkqוE7&^HJ4TUvƖ : ƥ3Cgi& @cpLJfR+mޘ ,ԍ uU+u|Vm:# vWIiڵXyj?i "w#Iݧdz:,I1)f*)-A럾Yh0)̾d.:B?Yyg[xPͫ~ֺ?i F[zڨ\ZBKe& 8|>XO;.l`TR!`&A6|OD&z37UN&3W󚬽?T^:rMJ-v69G'D&͛EuRZnAJ3$[*!%nC]FY&&p@*ۊTrZr1R07Cӧ.Xt1߀?T83/wG;p/;>= n4;a-uMMP1…l\$J<;!"0Jknd%&[ѓK~tvyD8E)uAq@?_p.~awLl%'ZP pn3)$E|x,x0L˟Ϋ18+g]/o}g~l?c3qח(_N8 J훌gvY\ˁPݽ?z[m݊Ec7\g_Z̝;/aȝ|hv?;iPׄ ozoI-e\ˤ֙IM˴W6Iz$n (jǣ "ә52:Y*Ul2N Vv>/0t֗||aF>ADt&t~2KzXJ9֪$a{HJŝgᰎ>7'vxvVq󳁟A\;? F˯IDJ_Q}٫m- ^]t -Y&e߀O*-hO(WC`.t kdO.NB ՘7JcW͊[csilذ5IE0 x0`y,?;".5W*M2#1)ᜅ. TC$Q6{,Rq⹠Op`Z`#@/7@(:xb eS29 V7J$ FLgJUf D0(JO(5j8h nazV.wE;^SL;k hDb4$Э]׼]WjQvAu II=vյFH8L:0.f֢_nmdl!RP'&(m]=bDqtcKKםQ&\j^q* &+kXoDnRvϾXWw{LLg]SsnZM)kKl9rpL\X߁z@DX'wOl!JƒN[.EGpHCIMZA) }W8M[RS"UR+rq4^x-9{ixFF VZr؆E6&0ѣFJ.3P3.!`)NS%M2R ff9N]i%ŐOUw،l׺k411ۈƎ8DnT#GGaT! 9iɽ8kl"m;r[/Q(څk1 )]=nH<),#x!ϬYiH)4fnӢBq"(>J! 
h9QƼL36EndC4,c>Ι2',V&YQNPD9 Id0Ҁ)E W:֢G JA )e1_yd@*= hA"\Wm*§B1d;HXHrʝp f #F "D2g32>F=Pk7w?#K_A|zh>nF;o5חף/(MaSf Hiwgf4"22h6N ~ql3r(Xjx7#\ V%e{MRKNiD/UUTȫ)pb#ViQHA k9()* ]nSm(v9BB4 &nXoڧd-a(nB5&S7_¹/KEt%mS(vym]9I>湔2O|j0]GYA-xU:Xd,h!)C[l1Ϝ I BPxHsR{D%=?KrfI QC,_g` #Fa߹,Ԋkr)VMA*=YKi.A]+sÒ8J5zWRMQPR ZLII`L8yy/eju \fnQ9ʑTs/AR;SlӬ,iO) @bc#}!NLP e u anӌΛZTZUx3c ;ߟd2ͧ6G5^^JaA,By3Ȑ΍jܔ.ʼ!jQmU^T^>h(MJퟗ?mHzt҃&bR9](7cr~bl#\.aI>éEy\y2<s%;W!ؐ<*$"yY)jTWa$YgjP,Nl ]2%=MCdi_u,;s\E ysL5O pvI\jR;q: D0*(ІljGIBv[tSǒi{bƲ}0jV6}~?5l8ǭ?K%f; kgy/ )$voT (#`lB2%[_T;f*<^"e:W'U.xD؜+D^Jn7? 0I\B0kameG/l쓣\<ړ' pVv9+6L.+ YHb)0:pM뤚k_JM֜0h Mv$WeDsnBrMRߪQ&2tRqT@ NQ Þzb鞗Kԁ()9VӟVSX*Et1I($ҵ{#]oNz>+L!CҞQ.{i=75Bh+y[bU`9Z9m!;e ;س,4@NXTjB|FgESk U;CEt';˧)jOFBɎGDPrJZVwNQuꭝJ]h 3A,KN|9% jIM]{s!\^vlKJ؅N |7K!Xjt^ m=ֵ9?n)KԦ_GMBT-ri3Xs:XG'Enh?OP k&j:m`J!9LWA{ +&ܻW A߿ʔw-Ծٿ2@ko q LlzꚃpꮣEr+2i\"] [gGe I0$^S,1ljhEeVgOxC; b*Qs8>Q "$Dd9<ޞJ/^4AL1qS LmOH+`':tՁ8a"nˉttq*"ǞZڡ TA. K޲DyT6N&A-*oI1 9ۡZKؚgL4 L]kN4)ANV{C>iX`GKU %kKHa/V6uifϙO:@! Djڛа$D$CJRKIQkuBqzXHry?)f$0Üčak/鏏QW~x;n4.X<1taW)j^HBgjb`p2@',&i̔V~eb|Vb:W:GuOY$McṉAU恏zzB^,oAȩ1;P~x4ҬyExv;VB'/yo"76}M&)Y:h! 
FrףH(rƣ'=gnxP%%q!-=^ hsˮ,50twa<j)y,Dɦno -_ɥ @2(ϪGHZ>eQbũGދV$;t X_F6I*n WmҨmյQ1>=9ʢcv},*n$< .beU:Ao7!cm=mUWOORNpq6)N/26u3EM(ҍ `=AӬ@d}xd @5uq6 +On g܄<1@12,K{ 꺽)3%2 qsW |DCWxeeTi1cgy g 38}/ -U#5o_ 7'S!-j9t.0F&a^1 /B81sfo~B@!<&<Gr4rOe0@.> xTj;jxľtr~Oy"KF_y1/|To>ͳtO;~tt@ 2__~Ֆu vғSI};}\cBb䐩Bd 1C/Rgܘ|}k0)!nI(qT5L&xy;&Tx?w Qfi,Xz3VRC'B&5ƣ[&`f(O)ix^m "vXD!B]_֊}ihPj%E;4丘'*RJ2r,8?M MzW{5:P xb5|27RDf}K7}ʊ3A9\F$?*%q(y ꀜ n iMD7_fJ9|`aK]tNo纂PF~v۸BYT(5ڔ +͕)R$fHSA|y%1?t}?QaJ\wq PG~>1|1R /A1dFK?ԟuP@b#sMC&;9;SbNDnu iжJ,m,xgkC5VEbPM"Z6]#5$h򧎞ATАRBy& ] ÀPrt|Ѫr͍xQ(L *(S 8 |:圸 C*,t) sN-m}.=D!$erٟ=BC~4G =P * Bn|](߾MhARa^y_l@fܠ[QGїbK(GC%/f CMwfؘKnJZ!aVOsk& PS1|?F)%h)0IX'PWN7@(b%bdg4rr:l@I]G?@&9Sް4)_ s}!=%,ϐ2}REC:5@Q (w~{~EK{0`#z?p/7J o5Af\l4+'ÖɆ=/E* nЎ=e*s9 aʑvK[ֽEqB$]z4A21{irZ譆$O$ʯ ;7 J.+|t]iヹ;4S\!N?PV1x3HhtgQY&fwGwu^U#7j,^znD}o'1-:GlJ~NFhQƁdV$#ܲ\09T,.ZA ,y#3&M&溃;׫V5 1PC?8J;0 ꯔOky}4.)"kw3Q̈́')I<S H"\.135'>GpMQwJ HUV98aHm !&BK'By1!Nh(oAY (C"2(%C)~(% z"psC?)4,bͪf V0!=Dq^HP%hotC_ߞiEpTe .GOTݔ {YkM,M`#AF~`IJͳ^~1"M?nwWd䋽yVp~FKe%lBk𬙢ݞTf-1"櫒WwFrhɄ9l k<(^Z|\x*ΏA0Ж:[|56g,K}?{LT];a_oӿ}gWk  |ؑV"5%yHI#]$t4E&p h/ectcKjZ%wK9Bk(kafDZg.-U+}YoҨt=t|}ܵ/~t4,cCSIQY2[S( uD<1Q'K'wTmW0b%(Wm!3ed{{*F%be#B*j#  41ж2R#. (vm6~]1Zi^gŨ֊Q)7Ĉ)$y`KU2zJXtl:|T?󂗷Z 0XI$P>Vׇ==3LȖP!&4@ff>2 B$b"+o(‘ )^hYŭ,o1}iN|'pY|im6R4O5i;>Xbxw&܎j$5x{L~Z~TciXx{%s U=P/=%I'Ge" |B˼fI禙\*PDY#1(:CZ1́Eo:]ԁ[%aFP]Z+FE툤TRH*G D )M7jk0ע4e;!w̫ 5g;:Pp<.+)r.rk(["c\3"5.cHLFfCSo iHDe=/ aQF!vpkɈjMGD>f_V}װD'3s޺ɧAC44Fʧ{^1'=c(ˮovhvŏ*˥ #XXۙ.[4mkkMц\bҐ*!3)@X\lһ]$J`cE4,z!E"0)N*Bo @S6kQ~Q)YnڄJVJ1# 1Tz2 b 5+D!(k#JRtH XR1GLZ3jϷ;/jώ4_,S\/1f] JnMQd'뭩CŚK륱O<+V+6yF6:x,X>0o}2wxЃfT;cP1nXVM Y)`mvbXuDy,Xt]ʙsCbm#cۚuζ"S)ζĪק7?d@);$H[c! fyK0tri"%{'mٗ9xs3էw+(>7~Wӿoȁ5KVŝc@(= W}nz[m}$Ui3WVlP@ˇ%&&[j8mK1,qz,J̖{Zq2+-#J0]#mi%3_9x3ofct=n$m~c(kӹ>#Xgs/c:S"9i#݆ҟH 0BŎt-ܿWhu9w φ!%vKZxx&R)K5VkĀɤsף1%OC!UvݎnD:+-j)QLJSnȩ Wi`)qhI2U檁0n.fE 1p;GoR4#$VHL,7^d cV  .!٘}ؤu>'ȵu657[E(TI4&EraDVs4tKK;덳Ԉuk-m)3H'…h oعhDc0\8$DtQOD F5BLQXN  X vqOV"@g!FՒ1bXgВRMx O.z" ԎݬvKZ~_=x>%0bfmEYBJM&Z>? 
Z*')ԑX{,r8X)@Y/7a9RZ`fuI?u!̭W_8؂<4΍G%MCu_&0t𳻽d<У>X?Wx> n\rWAg6 @GcUӬcUO[f|Soi4K_r\[<'=8uFñ#!/\DKd*оUl1_5 6\a5l?wyoo*_}8xyH:xa~2Gbt\%|]ӢNC>Vm=B.>yXL k :%bPKXk8 A2;Mwl"w|3a 6$䅋2%ygѶJJ-ȊItfڿM-&Xhe)N=YK6#csxv qwԧPM]p#p铙n+FdK §'Crj|ͻb!DáEe' @Kl N0$YYFP \fnЖmofJJ(2Zl7Mܭrroz-Y ʒgj.ȼmOchhCϾ"H]\]ap{HؒI:0i5Pp/kmcmis%T;%mN$= OVކ݁Isq^ )/EH!P{MwBnbv@͙"sU +S)Qsi\?aW,Dkp?Vp) UUh[CCzuZjj*`M޳=& vl_A_od\ƛx٦*V3˷ع#&v8yKS[ݻGaF-77&9j,D,w7ƚ,\,ܷ]} A X-# L ZwQ@`It@F/Chݤ]j.vSaT+#vp"rByd cA@Dt`uUpf{j"UTݸ/I^G6y|w[l$_)e;C*CV;m}w{9s!@jsfU>.XYEH!b%?jEwjK.Q\UwyB5`Qѫ=}3ե Wmݥ}ջg-.ž ʦP^5} Iq/Ti_8Of ŋӋұO=A,7?:g5(+Dzpk7͇~]00v<uŁ:qN$R yM8y8&ESL3`o ~|(hr^fױ5Ixdz{:e6ox1̛LojK|oưn)`?cB)懌m"S϶njqZ^!C06ٰy仹 X&]G!^0ٶRUfR-ErBȥYj6Ϙ,iN)Հ0(թ5^fΓg5\6PU\%֧{zJR?^ PfGiO罡MgBZiE}MzX`8cG?Q?Z!tO 0 rn,f _+sL{;h5{ %k{R.#Jm & +`@yBLC)aQ4X)< 34&Zc"F $խd L3h׏\;Hf دh@,Ybqr]'+~^uހF_B0괔]K#Y唳|&u))f!}9>g(~xkv5U;C$7FAFc)Ω*-(ujUXFdžRFsOňtnT5 9l !&[CVcғ(] R4+8@Z {.͛wdol'Uv˶TLcBBcP_,MO` W;}܁wc2{r`0w0zֹN'C=Rizqpzox0gOOK8{ 72-0W-pB7/W)4a|{YP~Wܜ͢Z_ڳbastJO:$mCHp}6HŘl+3RzF@|N*;ީX JӂղRw+u JX\b.88۲N\ hFm(N[ꅔ!A*}(r'ߎg{oiR]\W<+Nx@`ev47gԏR?ZHh!RT':L20,Dxֹ6^*@}0mp{h]L :#(ҧsXxq=~/(SoVԲifbXCs0 jK- 7` Yo ayDI婓W"$ZJ$= 5@`x ,P\5Ś6S[w! A=Ƀp ?ORh>Em0֛w?L&wf :ai0F_SgyN.J9ԅPld*?-%u )$ Zj\lg4+<7fu`7fu W~D]}N圷4٤~# G?s(a1v"{Y$j)=%buyHu<мKWݜeƻnX8(`MWݴL97M3u*ZcA1!Ufȵ KǫD?ƤwUt[xT)`^,0.[J(Dϖ Oތߎ>m,ѐ adz9?{;bӧŌ<pw:wy^nqp7| ĝe>p8 7/[}JUqU+8E ug`߳T\z4h6a1&Vx'm$ǘPVs 91c%m3+kX>? 4-UelVwQN -6tD땪0XwP qOxjBOKIkC^9Ti-햼'Z*j?jvM͸|s&V,_s^SfQR1ٖNNbK.pG8 ^S$!>DׇSL(!zMT2ORɋڪ` ⠊M1pD(&`@VhC^9SJi-7M'?wO?Uٿ+9/GEUE7G=]9bB&bT\GDCv~:e)DZU A Mfj+[o_n0`ctWxLa"]pMsO λk?܆*l>KF&L/$MiA{ucn?}we@, {;[T`f9ߺ qֵ(Pd[rXQF2t*YCզB5pLU08mN5R p$i1xPʟ8`q0=B`RlِU cyJc"0uB. 
jѕʊ`F靛 b[ab'1&<Gl۞ANjnL5+gϛPQYςqM ׯJ#KgL(=+4MtII D7^׾ ,]h'q*RC1.Kj_@3K:MZa0.?qTY-] +syLA 0XQT:~ɚ )[/IqD"b$Ψܨ,"1qƲNR%IukZTkrwuPIpsoH+0 (%UnY+?:1L:W RՃF%NI uԲ*(0cfyN+4yLTp\嚦i\RjUYě{xvNHTJ`{!Zr]9G( qrgx/v ǂ/{" a,hpJ ʥ5{3Ti͹nUY91fV,=PNVoq)o;ީTZQ(' ưCk^и>&FT߯eL{T=SG(\ebМ%gInB}{HgȞo^qDY{^I#pK~?k|~х_HH_>`ǃVƁuk*ؘ i,0`Psa$l| _#4 \uL$*҄b"'ulr[IȊ_IN4pɓ ā|+kFJԏ(H?FA1 ҏQ~9 RfeJT;m<r op?Xؘ*rDI:ެ\zZjџ&蔼zm}4) z{3\|2 O}YRܝcNYW+-ૌt\HeY jsFy/!lg& h ڞI|T-0,^@adl~M"B5S/B|Y9.aardX-E/$hTA˫4pQ.W}֯`RoZL6A l  >AW:)Ӫd0'+\UQ9j(Ȣ"ϙIٌUѱӗmL ^6a!P;1D/Jqx@1t7c$e9q" ' :5_/{`X9v"A*)F2<Ѡf쨻nR2_ts[Rq66\heWMkу#b :eڌ2yhY@<)e9H>NPn8tC+[N{r'l%b20#c~l~%<E_iGl8Sx5>G0%3G3iih \Yd.r((f`N XsC`Lh'+'(&Jɤ-9 @i9u(JB%m\0LŨ΄}S ^̔Im1 uuQbhL`Q%wQx4XKGIU= f*B$#z%b #=(6b#ius N RgJ. RF"e]̈́;6SN[C0 'BSr6Qe쌍ǖ|;%qlSXrl}VˑC/D7L$Xԁʰ}Ml܏aBq{gZŲ ʠkȋl썩)PʢSPCq7疽hM"B/Yt"4[>ѧpOR$ORc7/<+)c Q"1dRP:TхR =یz5وWdW,@-5@a;\VL|aM70 G :gƑ5h&BYnn<6r4ect](K~ IJѐKCOey RhwGvd$ա5C6xa9l`*Do(d@?1Ȃ-%y* Wd \"ciQxpZF _,s@`6K`a1X2f4^lg(zZ[k٘SV7gU`caKQR ( ɂy.''%SciՋCz]{ Iz 5J6Ճik;# a5U"j f 7<\Ոjl[2Rv)566"_8`1gT f: r2`J1~EQQysB=U( ӂ P4۠ծzJ."m486e)+0ɶ|2ue{e(dܙn)@G`jz@ nMkHU(hȸ&"9B4V7*ypO@o2OGi=֋bi֎Hˢ O`5N~+ I#LVfySz0}n,@*SuRjisÈ6B:. 
DYGB[!氎}8t$b2o8' MbTxlFy:o_lQt7fȌ!uJ*j#3jR2^TtzvZ^5͢~7+Rδ%WgVзlN)PfcWo*UUCqv97[L4j xJDXI"x.sE2&Ρ*MY thK;/lJim"Y`>< ]0XPHF/p%3EaռV3MgG8O3*)c?X->b؏Ș6FVdX02`|v7+!Uv'(C+4xȤ0Pt{6S[m«Ŏ #O",zs4 4IPh\ɿ>R܇0xJPodR`䮢Cwv߱ Ow܆!P{G,EK>h6q7L pY@ɤu" m'z$-"LI*AIk5;h2I%#Ce'$j {?"f òR:rb-7wyY?ch] }xx#*ebʉdUQbtI5ǜ5q{u'%7*zSBUA.K2ʹG 'll~+(zA+".z`9(Liqܔ|40[FKEV_VJ 'KCwJUv\fSN;9-`"Nm^}\2# Y>d.> >P{Q-8v#SrYDC:n+zs43 ѐ}7,f>`*𧑩X<(RYp>|YS@Xg F >b#bϛ殈$ d0Tm }Qz&\kϤ,+ c77{fG:`Wz[TCB(c6``OGZxƓp_>6t8dTUA&<XTH1kL &j*73őJ/?HWUQlW`#sE7PUs'kI0F]qV5(ױ$䙋2%"haϸMҌơw}PoRO;i|eNI[$mw*+~ 0|28Ipq]+0?ߓ(nnN//8$ Ժ̰ 㡅DW8IP 8TQd G5v'r!Tٓ!#?2&?^^<6C.O)0Pu:罜?9_ȻW[X< Yny1=3ecsGZecmű3UcPe 2./o~l24'Lrס\rl8iNw?]^n?ӳ[(~oTҕ2Ъ.oj~@Rj Fi}'y_B+" h+.OfkLJ 9_ y14jY|mՎ|_>|Z|_--8jBEG)]_Y[/X$NKԎ޽13Zdž7+hdOay.UQ~~ȭOhJDbn XJJ0l}ّdΌ5N?s7~ +zRGrt e%Gֲsm:] Iݮm]_jX*^H\ܜ}~<%I;0IR'I$E|RWě˳9AG]"E( " [\ WA| ue%(/\1K}7woQՕaz3tRLmJl:!>Egu%= A8ňq^'PґX('S;Jx.mىNMzw% Su!usQ[ͽ3Ӝ}HGm8&@Q@}BG܂{kQ@0CϔdIKUjD8C \5dp Q'ǒo1$M% ;liX/SӭJ.V Wp'U 1Wٹ1CTL"8l`sԘ'|s0(3$heaV2O?Q4ғELI~-tB{.%Bl~UTt-xHXӻ(!+W22i|RZwmmX~,/0Hz ww^mM˒Gߗ,VI*I%Uh$]$<8Y]-zI՜$FRį[sWW1EK tfuoEB0l4I/n>m3cHHv \ ~w\0.J(:'sp\Yf>E!E/ 4m×/o˶pvfjR%\* CI{018D00o4r!C*av'\$\!DWbJ1X)Dc+P HPBI3t0e(+*p|3لdR"i[,X,W[hNb#EHN0+I*`%2"֢ڢ0LlFx<85^<|?3<$$WyhHK0ϓ{aubms!:Tq6jdk~Mp xpWh{3na E4-0z B6sb}׌1laLn'#<8'Cp͝8Hqt*[̞}&ocx)l; VSXWM M.uzw]JZp 6x^c~\Ib$xa2*%|v<+dcFL{6_Q& 7 +) >* *.Xj0 )~B6Zf|u@,Unx5?`xz^L|X,> [bdx0O'}C8%yLx6հjO0 |^t%0*y$CԬ~9$f h ̀ s( ;pwS`rq`bMlq+roPlZ7mʑ\mEU t3:޽o@gNf ]&S։ ZrnU4Q6 c&J#.9J{Z` c4^HL 9FX*n~YNs>; ''KH+ENV{a0g56N;-)Դ>K^I),!"hH&:hS8hG Spz̖na/}h>M+ܺ=:iS_L?~k=p$iL()Zi&) Ն̤}s8,_;&Dk#aRplWyɠV;m52y =Q?.h<+cܷc@ԁw6miT s× JeSJ0lTV#b# JP"\LVspɯ`\=N_ۨZuF5qyD*Yo Sxz+P/Y<~6}bHyyCu?ĘfU??u܄:Tܯɋ![\15A>#?nn(O26/6.N$>ſwT`K%s׫΃nE}\=˥eѽ[\7B/kJr'ao/8pz+x,faq|;`:{z ϯn/j6p0bQ5bVLȠy ׬3^NbkKY/b^yB낯O!  
>PqV)U*NP)aBi\k( $ (wnF/O8n6.ab@kc% v0g+yH #xXC6֞ڠ\L,c7x:QfFJ] -.pE.-D29nomϙO7YKo|Gk J@;@t7@ 55[ +: `B$[ _x O+ČzߴPgDIbmFV^8H ;Z(~]{,Ʈ mR:֙c()Fk.iJ C'"Nd:[<=HQ)KXkkL+W$ EqyP^3yHz/k3L=-y2߷JB00^(;`8\ b h,[L I) =70hGsR"!r"Q8 ʔCiT0!/_+30=KYN53A$AH E@(Ì*Tǒ Kעm%Q瑱>8hX%2Z[Aɂ&6qǙ D1P`P $0p,N#ECE9(>v"$QZ 1 V*x0d8 B8M4O>͉ md9 J{' l8ۼD//e/U9B!M|L⸷3-PZo%8{i(ֵfJ N&$Pci\!+fץ){N۾16~}|2[]{(o/UX6?}_-a_L!LZiGX`.`#X{hzp[Y7:7z?4ԏ=y2m r2} $/jBf&,@u:mRHJz&^^!֠us'‰t uGkeu0 oz3hl %l6<%Z obG+P?lAU@7Uu# I XRzI[JDn 24 {V&)ۧ{ 00T/k KWyмzaf2=S['[k?mT,D'V!/9AM.I7FNÔ6ꆴ޻3ڢtUwwOuD*Lc>m-sw]ŭvotIZ(~1BY+ a"iiܪXUY=pGfX,t{dhvN_gn@=RҴ}R2u_h|H\NvRKOm~ wlj}lv߉rx x|C𴰮VNh߾kx%={JII,DPpOV:Vwb{!-L坯Y!Է-nvt$H_&_&.L:Þ颍lU{lnHwkY4Tw>/ci63[xGGf#[ em>Z -2pd3̸Uy=wSa Vnor2#uƪshybs 9OgѕkF_$/Dz圠Ju^"ب,Wr-k%ldX z$T}IBɊ-jP,KD5Y2Xٛ%s6-_D—<1*4r+*"Y]uses||0p, q=2I?^.eHF'.Nz"}7Nc޸Dvj>;3P1{/*[}'R]mrli{*\"A-O{s7H."'Wh#_WfO9 8pB޺%NCN1p_(!A2^( #Bz`Z ,0胲IBlWe̤%$`n|"TFoZKw~#/b7={rg?[ikOO1H❡^Z=r?!bX2G%&•Z)(se"Xyn_aQw;:ټ1}y1kn߼57o^jgZ^:ټb{pϺCŜ͋'#7/l^,>ټX}ydbN7/l^,ıLhlDtH0 28 D@HE M< 3 N6/t(&k}yDdbOr5Œ͋^k͋ZHqJ犙󒞒%P 7|:Xw7b:CNX_1/K% 15UY3֣nOMH_'D"JHINAUKC%8`QT*O$ .:X .֕Y#]bb²~N+N&{c&6ͩ&yӸxlv9޼)yVCP,$|6Qv7[-zS Xdzcytٵc!\[ڶfz'nG\mYl[?׭4ʖ`DoIyd3ML~z%Pɖx{wy7fom)L NTKrE5D9k hU/ yBrRcl$1䶊knczIԖOHwؚl6dRa9dxWHlyȻV{wFjFrn^۲ ǂ6ˌVǂ4fU!,֭mmuc󍴼Dj>yqӇ?8~30:] G!rymyk;"ž> 9cyy(P[ͫch=-A%IikdsW8\k+CNVR` #XmN5Btq66-*OU/zBE߭ -VAhuZ_Lh/OXۜX\ܐ6ZcN.0Tͩ&Z 9J>ls6:$ͩQ; I=ls7!ɧwP-R-PgP}J5֖wQ$|ffi&b90$iP+drћ$럋xm Kvugg+luj lӳr߃(N-Aqr=kT]VKs/)N~ TKs@=֘rs1w` psv|5f5\c8WcfR\c5N-AIv|5hs1wi @<3e=!טO̙<3;\'IԘBj/טO,Ц}G1 ϐ5-q~|5fQdƜk]X"jck̹ܥ%H՘as9ט; [1Kss$jR#1z$5\cT}>Ԙ#,טsSK`L_.s9ט;}֘52טsKK扬S-05\cWcr9ט;i=a5fչƜkZK裫1x0j1s)&GWc&CY1s 83!s9ט-b];uz>^W%гq15#W'Q kFEtB)Fb<ՋhzfQ?7sx_G 5/ӂ_W,ƌ~e+W¿YU *<ȟM߿ Y^O~Ї?@$n``k㾄)HB1wNiP CtR2=gPb< Ry>O,*@Q>(eꍞ# _⿖V5{W-aov+Psh{jh+ 4_7~__׍N_ǹ) Յ_H%ۏG]BVs-GJqM1\a+2 bs‚>;T{ѸgOJ. 
'C*P<?ߏǷ ,LciTQnxu9];?<:6@:pۥf2 >ۻ1^_ߚ9y}o?}FLyjSLP gJ!Ғ +Wt axApK 5kcBz h`*8aa)d"4X5AT RP 068"HЖrn`|!K_qt[>GrK|-#%=MFv?o-Lɏvk#~0iPD(>c˶ ]xEeϿ2" W^ Ud'?&(>SH͆lv~__CNwуO!7տotv7%`?G#~$p;]?Lat-5`|\bM0KПnHKYC" N˟.j}V`J~^bK`j1y6*_Ml IpQi<C{)!@bGn7*[ VQy=QuH7*FidQU3E XVУaL7dL!N0!Usȝ 6V?W"YiJjui2CsLk=`%!:)L犤(\ԛoZ"!Q'4+s)Xђ#Āap*X5NOp@]Z$=/1Թ"-!t({B$,I`1A_kT]Ŗ4z|FsZjc"o (hi`͡7kl|k!\n hw>@g/\%+|'>i_]y'@>MvQx~ik[e9tll '>cVgzTK|d7_?8Zl'3Z~ߧg}q,`."-1ׄepӝ3{O1JV Dk^ןꜾ?EF9Yi!$}29}5J3_֧3:9 Nun!o P6z7+*6W·CqtǐFX2mIxd"1FPIL!4e)v>O/yx}4#ߍR珤~jt퍶ZVu{FY܋ܧC]ooN07g 𙒠NϽ=7?^EdIG_3nSt:͵d*Q(]}4ߜ]u7dOQxvȁR|]c0 Ldߜ`w!td j6\ښgY) | "1;cPIp )E:ԩ E%mpA#,J1Bf^D]$QO ٙIEF瞫c2_̍`0zuΨ篆\yxmRV<#~q"}}X.K$Rfo8Bי:[$F#/DQs$񅒨$'yVq[ؿvO"$%\vʂpAT :X)';Jӎmy}VBgTJ9p^إM_͟_̿tv㞏>߮x c5=%k1)1zN+ӝQ@u>/7Ѓbh+b!S I˓TNw譂#P18[A>@mh&F[1뚀dq0!EyP3]fQ?8 f_zP5ܳZ&`eK]ҁgK>jşFo Q"h@4xͅEB+-uDD+TdL0EePSboxBxf~k#~ܧFݴ=(5:25Sli2DgJX/s_ f3IˇX8A3-U`s) @js JmQK%1lRs'P(`Snj" 젊V:cqd-9&%04o&dTp8ުEJQ͂:åpg}Y&1c i⌢2>A\YBjծ '!vsk!y8!q~dڷ0tvpCP y9HjT LUݙ_1Hס6e|viS&?Juz>]e"jtnq*R>|M~T̍2~^'b/7mM*wҐW%:Ű|inE5VKȖ1樒У1.,GKQS kАWE:[MYAꔾuqKUg֭X\քr-)8,>jNR]{EVD^zQ,6J̄!\0mg Z2Q9`DYB@,3&gQ_q{a A YSU@!_D![5ֿg=pByg_Sׄr-ҩJ{+ɶud֭)}G֣w9błZ&4䕫hN J=-&d|E!YwԊ]r%Eٱ4䕫:xj0ya"+:}wuҞ|}(Z=8Jzj#7'bjNW>MgXYiwϸT 9SxqԦ&H Qo9g%@1CvX+aC 3 Z(G`A;̕q2fǑOnK$7wՖ ;.}GӑgFfaiKLyZ{ I UVi.c=ᥝwc4vrqg7!iQNSjB{!pvPRi%WۣETcvPR)C%Βʽ)it(2YRY{XM%cYR7IK.K6ݔRM軋]̆OmxύqֻC*Bϕ)`|gYǟl,ٷe<>Ŝce'L"<8͖htآWx,R1⏃i\;rjG"zub7˧,DK{q~z ο][H0y0"籖SD8DGh#;Y0^xԔ*G T Aѫ1@Řo P,yx8'E33`AJRU(rB&pRZ}ٯ_cl2i9J U6 +St{:ٶmJ@eu֍!炵2&TwjV?^[WmVhF #)tqpӑݟmtF#1tٟ-T={o~)…,EԲ[TL]%nYWLfuj]e& ^O4a-ȨPvu*7QۭrOҕG$y|yĢrGKy"!E F'p'姹[GS*jB$x[^(x%.ƽ|? 
߬| 5ǽJ]_]+H70t+ŷBPpIװٵIu-侣ҤW KI{w d) B: tcՎG`AB+؜<_1oIaYH_ nQn8:FY\?Oi2|HC av6{m(#%99/uLqiﰑ,e0J%^@ 'ďqAl.^^J_t6ZgmLOU,=u/?XdT<>j;a!k1BD)@ g/45J@'^EfPGD VK܎?q5(a6Hac&i"LP&~/MYo7M8X''HfqG귨 X#WM,Fyb4Aqb u.c|0O%88%Qޖ1H+I>}Dp@"vJ/WR_Z S,sرqE0OLo<<HO<^G-Rؽ;S;SY?wg?K8ׇV~Lp*ًc)Ph|Y6r(*S**S%bQYN^_$=ajAng>~^cg Ҝm{EM$ށTq TLeFQ!k]݀۶W0P)d%cOa&.>Jcq0x8 ͈VYhŝb", |bfׁ@r)^'RLل ĹhD#lm` x2hʹeć!:Pc eG,#Ř(+ndAXQ"#G3p~h9QXF ȁ^ (z=Me, ToqC L/5Q9P:JAle쎯&tjXLP"5zOlHg^aqZ[k3Ux i ҄TZQXÅ#_;_^)fsyNɗ79 Ftϼ4Yc% ":7D%:<@%0ĺӫ}Q޹j_p>v9k}[ÖgQ b `Cw5lI:6tjn*}mxSUp8PhMɁ|w;S[r+ZqAқ!7'E_ :#\p.T]ĆT,_x5Y쪅4!mT"ċ?!LrC"Pjl1`F*2&RL;rgk^ffGreRSy~9=8RNDŷu.aVhe1Ug@ G{[Ğ껷0(Ͽ} qE^n5y,V"e_b )ډJ͙'Yŵb PBT+lPFp,Bx%! H!DE4޳覑l+Zޙ@&NndnDĎS݀xHGSUSCvɜmqB*% IB$Cbo7uXad5ƛȖa&VTrUح^^ .0| YŅut>'kȤ\*֣S@ja%.L?`anjé_,>VG;,܂tPkr;Cu6u*I~r7iy  -kKCdmA"5[nTx=b׋BT qR-ڊ42ĻHD82 {UZrzL;ݤK9 bv?D_]|}ME4tN Ԃp65@dJQwUZ$ Dǻ:^k$]9#Y -[ k(J X Y隩 8RKذaR\V~E)9W0\6O1!Q G $RHH#LD2;C?DL KX;\ W ^(:`zI-DS1hDX8hzZ mU_e(7yKr['C_jɭW(OD'c$R[#D֧(Fk|3q\߮2*,REsFK9<5Tu'U0dͅ|ugϤ Iף,-Кr!;wSAusg7NLw]lLCSaV) +V+&ɷK -BDT!',"E\qx%oW lRm=VđvG166Ŏ!!{*i2VA]oN Ȩ+ϏUUJUY451" ˒1I\:\ {&Y"5JIƈ $zг/UT5YK}Y!l?m~np;-"fIsC"7刎O1VᲤ:^0꺲Z;NH9} yV{xR1P.YQՊ; ^Q%КKwZqKiBm{4t}U|09{>in;ە/>4t<|7hc w<%A@ 6V `ZJZ+gcS·Hrs9v>k: .uϯJ:KQgEt&i2(gkπ#F>k@(鏍 Q|XZ`W3#8&jp& @d5Q S>YZKIc3!,AO`u- zG,Q*qo 1HTfb麓/L<+L-yUiQ5G%Rμ @S,M[$N{`Lxh@6diNb8}PMLJRu.AkMg8`c+TVr]gtk\8XrFR\]a7%( K&Z(`rAy14O4\VwV9V ˚5r%xo+Uۚ%cZ*5Sv 'd.c;4^w L9V.'ݨԓc7oڀNɯfGa?rƇ<NmXZ4i6Ynqi&4SХ =wKTE!j3Gi@3EV=[ת;Y k'p_ɂ(P+x ֛/ɇK+1*' `􎷊-CHńrY,3j)s>!$Yj"t#"''7O>#gpӳ"Ť Ěh,)YF DQ⑇Пw$ UfLk ׇc%>aR{+yn$y?msK5g&%QMτL;}8~~rp^N ! 
1Ї9Di|~d$ao%džǵN W'ibqpؚlb DW[Pc5Y3#[&g ;> ;>TFX":x x1ׄ:ryIa=g{Ԝ^/VIgci8ȹDZ6Hjed˱$$\K"'ϞRX˄rh}l!~ b9oGS;jbBqPV3?O o<>q^٘sUgEQcbNtIB, FEJk8SORxcg'ND$5Bmñc,$n1iq.<è|q~{{ՍRG* -Ka>aj1Ib,13xt0jMX TPm-ITEfC]O RoIL4UjT8\i|V٧OpȝD NN-=YBK|_H(ÇmO KSeA bQ;p:1K" ФfI/e4b,wœ" Z(%<FIAd"ep@hJCfgQ41uD m4Jw8*@RpK09(*+ap&X2)p&(%,0ҋp1q\bD n )宍3vGE+FEd0HF4ৱwv]̫l$Km/ߗґuGC#`0 g!fk,{qbS܃ѰLNjڝPX#F4z7wa$o0>vG,яQM3l&jJ 0`K(Ta g)76`p+"߸ ,׵)ot殦!i=#9Jϼ SSfx% QWySoUw>Kiqä ^Tr>f%d&I)Kܯ@n( Ez5!o`-~ndB^[ gU,vzXu}Uil͓<#4~[,6}uvG{_Vyd_O`<+ ӏ'ģ4uߋߞ/RQKO;JYO:|d㪉3:xNߩՁU8D31O_y D]gi#/?o w lw6o"1 v6G(>ecVV|[o􇭑پܮfRpNx= Yz!|بE`Eڜ[3D5 ~N8ܧh9J~5e[ъO(C:>!?!-rU)3k?dvmO⁔&9B^G\2>>YK3eS4 Ӥz"C 3x02}?d؟}{9x<4/&=s+L̓~CeeV1z^iG^xl>~3GNs3^ty˻//+V ,f)CeO2UX)jugoTB!utL1%#n ٝC/s}RUcvsW?Qu"5k^q}xd{`ɘ#|Lټޛ}^~{hJEr[]_43 8jr(ΣU51:SpdUҠR[ @ 5%:tUpVPefN2Ze,s,B9l\[2z{ސg$uAYM 3,:c `_y1tY/6,e(֮*ԅCp9"Y|Kt3Mރa%S"*.jR W=L, 8A#;PZ`43_ܦ20jPELak ,Q'5 4%6xL &L!`8n #Wm<Ņbik7Je*N5hc6Nxy QDiћ‚qM% UDi4DfB}\m)f^Da b8:˽VhfBniƽjkP5 ^k;*,)C,ccf^iJ; ڢX'l{@*&pI8}gO/~!LgE~9UZCMFxCOf;Tyx]^vL,z͋H X}jeVY5dV4_܎0x< nSo4y82R!;%NbUe{x,V&fA~ߞ mMxM J53HJ9-ccKkr,Z_'FJӗ<*Q<dz,Fg 9yJz4f=N.w˜?$c6vk7}Fz/xq- ,޴gU›-?g]wc3VȼSY[pm7{[WL_LzxBfEkh"2l&+˽EB]ة!+:<Y˄lJf_~/*ڽOG߄Jc}\&|S&ҽ)_Euܿ? Q2#sTy1 }\"x]f"eu"BO _|e‡|i|FhgjrW_ݯ!6eǯ?b]ku:++-r=Aa;o14RVҎ7ȧEF{.Y/^E1_z q,yi7I 9A`٣˖wVYF~U]2sQꬔ9eMcl43yl!ٞT k4Ft,t]t+ bL? hF-e$ *1nM)נ'> `T.eR@?HMrHJ29<2c|$&HG^54")0.hGY)? R苋|0v]Yz\GO …ь;0b 7~ %xPRpBr2RnE <ܑC"DPLO>@W3^"LJy,-%P"%yIFD qn^ p҈\9aD^Y* bQ 5Z#M%䢖(Ic-? V[> DAbO;nƐbx)[- տ:6:ᷟ?!/ٵއNDf7^/fyjv?ug>s>B0t2/~`W MjS$idg u}sϏ ğ~xxk j9_|lؾ}B^~;{yU5UeS|j^|C?(Ri(9=Xv9S6>k#$ʺ2c&FPPX (mQ!peCV4뒌b$W'{IpE(!\&'N,j( kɢVZYeIZePR?Djh1WȞ-}ɭ gRh>Z8VLG3؜&%7ګ!):*`U6AY-=G(BK4.MHBFQx KSS!|{Od ^-ÈC)'4'Zb;;Z:^sՀ3˕ߒs3&." 
Feb 24 00:05:41 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 24 00:05:41 crc restorecon[4691]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:41 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 24 00:05:42 crc restorecon[4691]:
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc 
restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 24 00:05:42 crc 
restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]:
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 
crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 
00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 00:05:42 crc 
restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc 
restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 24 00:05:42 crc restorecon[4691]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 
crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc 
restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc 
restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc 
restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 00:05:42 crc restorecon[4691]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 24 00:05:42 crc restorecon[4691]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 24 00:05:43 crc kubenswrapper[4756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 00:05:43 crc kubenswrapper[4756]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 24 00:05:43 crc kubenswrapper[4756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 00:05:43 crc kubenswrapper[4756]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 24 00:05:43 crc kubenswrapper[4756]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 24 00:05:43 crc kubenswrapper[4756]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.547146 4756 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556783 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556814 4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556823 4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556833 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556841 4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556850 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556859 4756 feature_gate.go:330] unrecognized feature gate: Example Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556867 4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556880 4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556890    4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556898    4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556906    4756 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556915    4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556924    4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556932    4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556940    4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556948    4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556965    4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556975    4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556983    4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.556993    4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557002    4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557010    4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557018    4756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557026    4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557034    4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557042    4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557050    4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557058    4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557090    4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557099    4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557107    4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557115    4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557125    4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557133    4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557161    4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557171    4756 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557181    4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557190    4756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557198    4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557206    4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557215    4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557222    4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557231    4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557239    4756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557247    4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557256    4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557264    4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557271    4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557279    4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557287    4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557295    4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557302    4756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557310    4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557317    4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557325    4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557332    4756 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557340    4756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557358    4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557367    4756 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557376    4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557384    4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557393    4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557402    4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557409    4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557419    4756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557429    4756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557437    4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557447    4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557455    4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.557463    4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557588    4756 flags.go:64] FLAG: --address="0.0.0.0"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557603    4756 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557621    4756 flags.go:64] FLAG: --anonymous-auth="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557633    4756 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557644    4756 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557653    4756 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557665    4756 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557675    4756 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557685    4756 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557694    4756 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557703    4756 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557713    4756 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557722    4756 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557731    4756 flags.go:64] FLAG: --cgroup-root=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557739    4756 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557750    4756 flags.go:64] FLAG: --client-ca-file=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557758    4756 flags.go:64] FLAG: --cloud-config=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557767    4756 flags.go:64] FLAG: --cloud-provider=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557776    4756 flags.go:64] FLAG: --cluster-dns="[]"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557793    4756 flags.go:64] FLAG: --cluster-domain=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557801    4756 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557811    4756 flags.go:64] FLAG: --config-dir=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557819    4756 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557829    4756 flags.go:64] FLAG: --container-log-max-files="5"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557840    4756 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557850    4756 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557859    4756 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557868    4756 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557877    4756 flags.go:64] FLAG: --contention-profiling="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557886    4756 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557897    4756 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557907    4756 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557916    4756 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557926    4756 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557936    4756 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557945    4756 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557954    4756 flags.go:64] FLAG: --enable-load-reader="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557963    4756 flags.go:64] FLAG: --enable-server="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557972    4756 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557985    4756 flags.go:64] FLAG: --event-burst="100"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.557994    4756 flags.go:64] FLAG: --event-qps="50"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558003    4756 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558013    4756 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558022    4756 flags.go:64] FLAG: --eviction-hard=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558033    4756 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558042    4756 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558051    4756 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558088    4756 flags.go:64] FLAG: --eviction-soft=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558098    4756 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558107    4756 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558116    4756 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558127    4756 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558136    4756 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558145    4756 flags.go:64] FLAG: --fail-swap-on="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558154    4756 flags.go:64] FLAG: --feature-gates=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558165    4756 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558174    4756 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558183    4756 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558193    4756 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558202    4756 flags.go:64] FLAG: --healthz-port="10248"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558211    4756 flags.go:64] FLAG: --help="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558220    4756 flags.go:64] FLAG: --hostname-override=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558228    4756 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558237    4756 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558246    4756 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558255    4756 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558263    4756 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558272    4756 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558281    4756 flags.go:64] FLAG: --image-service-endpoint=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558290    4756 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558299    4756 flags.go:64] FLAG: --kube-api-burst="100"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558308    4756 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558317    4756 flags.go:64] FLAG: --kube-api-qps="50"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558326    4756 flags.go:64] FLAG: --kube-reserved=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558336    4756 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558345    4756 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558355    4756 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558363    4756 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558372    4756 flags.go:64] FLAG: --lock-file=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558380    4756 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558389    4756 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558398    4756 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558412    4756 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558421    4756 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558430    4756 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558439    4756 flags.go:64] FLAG: --logging-format="text"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558448    4756 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558457    4756 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558465    4756 flags.go:64] FLAG: --manifest-url=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558474    4756 flags.go:64] FLAG: --manifest-url-header=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558485    4756 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558495    4756 flags.go:64] FLAG: --max-open-files="1000000"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558505    4756 flags.go:64] FLAG: --max-pods="110"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558515    4756 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558523    4756 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558532    4756 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558541    4756 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558550    4756 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558560    4756 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558569    4756 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558587    4756 flags.go:64] FLAG: --node-status-max-images="50"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558595    4756 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558605    4756 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558614    4756 flags.go:64] FLAG: --pod-cidr=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558623    4756 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558655    4756 flags.go:64] FLAG: --pod-manifest-path=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558664    4756 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558673    4756 flags.go:64] FLAG: --pods-per-core="0"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558681    4756 flags.go:64] FLAG: --port="10250"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558690    4756 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.558780    4756 flags.go:64] FLAG: --provider-id=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559709    4756 flags.go:64] FLAG: --qos-reserved=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559736    4756 flags.go:64] FLAG: --read-only-port="10255"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559749    4756 flags.go:64] FLAG: --register-node="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559763    4756 flags.go:64] FLAG: --register-schedulable="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559775    4756 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559802    4756 flags.go:64] FLAG: --registry-burst="10"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559816    4756 flags.go:64] FLAG: --registry-qps="5"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559840    4756 flags.go:64] FLAG: --reserved-cpus=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559852    4756 flags.go:64] FLAG: --reserved-memory=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559927    4756 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559939    4756 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559950    4756 flags.go:64] FLAG: --rotate-certificates="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559961    4756 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559972    4756 flags.go:64] FLAG: --runonce="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559984    4756 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.559994    4756 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560014    4756 flags.go:64] FLAG: --seccomp-default="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560026    4756 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560037    4756 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560049    4756 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560090    4756 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560103    4756 flags.go:64] FLAG: --storage-driver-password="root"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560114    4756 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560126    4756 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560137    4756 flags.go:64] FLAG: --storage-driver-user="root"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560157    4756 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560169    4756 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560183    4756 flags.go:64] FLAG: --system-cgroups=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560196    4756 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560293    4756 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560305    4756 flags.go:64] FLAG: --tls-cert-file=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560328    4756 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560343    4756 flags.go:64] FLAG: --tls-min-version=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560354    4756 flags.go:64] FLAG: --tls-private-key-file=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560366    4756 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560380    4756 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560391    4756 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560406    4756 flags.go:64] FLAG: --v="2"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560421    4756 flags.go:64] FLAG: --version="false"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560443    4756 flags.go:64] FLAG: --vmodule=""
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560457    4756 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.560469    4756 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561530    4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561563    4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561574    4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561585    4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561595    4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561605    4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561613    4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561622    4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561632    4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561642    4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561651    4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561659    4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561667    4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561675    4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561683    4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561691    4756 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561699    4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561707    4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561715    4756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561723    4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561730    4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561739    4756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561746    4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561754    4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561762    4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561770    4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561778    4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561786    4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561795    4756 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561803    4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561811    4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561819    4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561827    4756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561836    4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561843    4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561852    4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561860    4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561867    4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561875    4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561883    4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561891    4756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561898    4756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561906    4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561916    4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561929    4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561939    4756 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561948    4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561957    4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561966    4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561975    4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561983    4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.561992    4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562001    4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562009    4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562016    4756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562024    4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562032    4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562042    4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562051    4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562102    4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562118    4756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562131 4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562141 4756 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562151 4756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562160 4756 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562168 4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562177 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562185 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562194 4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562203 4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.562211 4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.562240 4756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.579933 4756 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.579982 4756 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580140 4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580158 4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580171 4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580180 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580189 4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580198 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580209 4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580220 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580231 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580240 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580248 4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580258 4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580267 4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580275 4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580283 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580291 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580299 4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580308 4756 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580315 4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580324 4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580332 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580340 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580348 4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580356 4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580367 4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580379 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580389 4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580398 4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580409 4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580420 4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580429 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580438 4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580448 4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580456 4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580466 4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580475 4756 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580484 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580492 4756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580500 4756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580507 4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580516 4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580525 4756 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580533 4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580541 4756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580549 4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580557 4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580566 4756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580574 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580585 4756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580595 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580604 4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580613 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580621 4756 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580632 4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580641 4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580651 4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580659 4756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580667 4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580677 4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580685 4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580695 4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580703 4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580711 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580719 4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580727 4756 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580735 4756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580746 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580754 4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580762 4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580770 4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.580778 4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.580791 4756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581010 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581022 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581031 4756 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581039 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581048 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581056 4756 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581093 4756 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581105 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581115 4756 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581125 4756 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581137 4756 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581147 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581158 4756 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581166 4756 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581175 4756 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581182 4756 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581190 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581198 4756 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581207 4756 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581215 4756 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581224 4756 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581233 4756 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581242 4756 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581250 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581258 4756 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581265 4756 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581273 4756 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581281 4756 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581289 4756 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581297 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581308 4756 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581315 4756 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581323 4756 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581331 4756 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581339 4756 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581349 4756 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581359 4756 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581367 4756 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581377 4756 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581385 4756 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581393 4756 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581401 4756 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581409 4756 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581417 4756 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581425 4756 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581432 4756 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581440 4756 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581448 4756 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581456 4756 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581463 4756 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581471 4756 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581478 4756 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581487 4756 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581497 4756 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581506 4756 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581513 4756 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581521 4756 feature_gate.go:330] unrecognized feature gate: Example
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581529 4756 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581539 4756 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581548 4756 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581556 4756 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581564 4756 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581571 4756 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581582 4756 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581591 4756 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581599 4756 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581610 4756 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581618 4756 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581626 4756 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581634 4756 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.581642 4756 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.581653 4756 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.583512 4756 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.592124 4756 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.592278 4756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.594526 4756 server.go:997] "Starting client certificate rotation"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.594580 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.594844 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-14 07:56:05.940592134 +0000 UTC
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.594949 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.622145 4756 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.624354 4756 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.625129 4756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.646621 4756 log.go:25] "Validated CRI v1 runtime API"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.687668 4756 log.go:25] "Validated CRI v1 image API"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.690735 4756 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.696556 4756 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-24-00-01-09-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.696610 4756 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.726601 4756 manager.go:217] Machine: {Timestamp:2026-02-24 00:05:43.723115721 +0000 UTC m=+0.633978454 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:47ce3691-1d61-4b4a-a860-22c7e0dded9b BootID:75eca80e-f61e-4b17-a785-e3e58909daf6 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:8f:9c:82 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:8f:9c:82 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f9:0b:1b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1c:c5:36 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:3f:56:9f Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f6:2b:9c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:5a:d7:4b:f3:e7:8f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fe:31:2c:b7:e0:68 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.727107 4756 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.727304 4756 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.727737 4756 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.728021 4756 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.728104 4756 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Q
uantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.729307 4756 topology_manager.go:138] "Creating topology manager with none policy" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.729340 4756 container_manager_linux.go:303] "Creating device plugin manager" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.730287 4756 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.730330 4756 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.731375 4756 state_mem.go:36] "Initialized new in-memory state store" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.731524 4756 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.735549 4756 kubelet.go:418] "Attempting to sync node with API server" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.735583 4756 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.735625 4756 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.735646 4756 kubelet.go:324] "Adding apiserver pod source" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.735665 4756 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.741557 4756 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.743163 4756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.744953 4756 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.745487 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.745484 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.745657 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.745590 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: 
connect: connection refused" logger="UnhandledError" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.746894 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.746929 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.746939 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.746949 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.746967 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.746976 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.746985 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.747000 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.747010 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.747021 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.747078 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.747089 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.749579 4756 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.750135 4756 server.go:1280] 
"Started kubelet" Feb 24 00:05:43 crc systemd[1]: Started Kubernetes Kubelet. Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.753363 4756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.752332 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.753403 4756 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.753938 4756 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.762378 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.762449 4756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.763018 4756 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.763192 4756 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.763299 4756 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.763810 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 06:17:59.16098423 +0000 UTC Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.766910 4756 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:05:43 crc 
kubenswrapper[4756]: I0224 00:05:43.767447 4756 server.go:460] "Adding debug handlers to kubelet server" Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.768246 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms" Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.768281 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.768474 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.771559 4756 factory.go:55] Registering systemd factory Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.773424 4756 factory.go:221] Registration of the systemd container factory successfully Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.768372 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189705f89dbe3053 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:05:43.750094931 +0000 UTC m=+0.660957584,LastTimestamp:2026-02-24 00:05:43.750094931 +0000 UTC m=+0.660957584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.774828 4756 factory.go:153] Registering CRI-O factory Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.774890 4756 factory.go:221] Registration of the crio container factory successfully Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.775121 4756 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.775179 4756 factory.go:103] Registering Raw factory Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.775225 4756 manager.go:1196] Started watching for new ooms in manager Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.776082 4756 manager.go:319] Starting recovery of all containers Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.783917 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784000 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784023 4756 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784042 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784058 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784107 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784124 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784140 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784161 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784178 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784197 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784213 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784230 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784248 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784265 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784283 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784298 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784315 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784333 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784353 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784373 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" 
seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784423 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784440 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784460 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784477 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784495 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784516 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784534 4756 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784551 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784569 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784593 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784611 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784628 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784645 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784661 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784679 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784696 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784714 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784733 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784758 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784776 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784794 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784813 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784830 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784847 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784872 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784889 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784910 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784926 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784943 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784959 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784974 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" 
seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.784999 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785016 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785037 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785056 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785096 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785115 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 24 00:05:43 crc 
kubenswrapper[4756]: I0224 00:05:43.785135 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785152 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785169 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785186 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785202 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785221 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785239 4756 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785254 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785274 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785292 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785309 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785325 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785344 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785360 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785380 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785397 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785415 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785433 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785448 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785462 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785478 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785495 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785512 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785528 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785546 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785561 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785578 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785596 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785613 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785630 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785646 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785660 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785677 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785692 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.785708 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.789996 4756 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790118 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790174 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790400 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790442 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790523 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790617 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790682 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790726 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790771 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790819 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790870 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790933 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.790993 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" 
seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791029 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791091 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791129 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791174 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791233 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791270 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791301 4756 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791339 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791368 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791396 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791434 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791465 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791509 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791543 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791572 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791603 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791632 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791662 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791690 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791718 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791746 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791773 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791803 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791832 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791859 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791885 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791926 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791957 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.791986 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792012 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792039 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" 
seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792097 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792132 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792160 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792188 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792216 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792248 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 24 00:05:43 crc 
kubenswrapper[4756]: I0224 00:05:43.792276 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792302 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792328 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792358 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792430 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792515 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792545 4756 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792571 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792599 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792626 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792651 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792678 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792705 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792735 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792762 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792891 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792919 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792948 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.792988 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.793025 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.793057 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.793118 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.793150 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.793224 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.793254 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.793285 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.794149 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.795763 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.795825 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.795855 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.795914 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.796291 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.796327 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.796375 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.796395 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.796463 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.796482 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.796719 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.796840 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797001 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797022 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797081 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797103 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797146 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797164 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797188 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797232 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797296 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797339 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797356 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797372 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797432 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797480 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797500 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797519 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797567 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797585 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797629 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797647 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797665 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797755 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797773 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797847 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.797979 4756 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.798004 4756 reconstruct.go:97] "Volume reconstruction finished" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.798019 4756 reconciler.go:26] "Reconciler: start to sync state" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.810886 4756 manager.go:324] Recovery completed Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.830004 4756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.830496 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.831659 4756 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.831774 4756 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.831875 4756 kubelet.go:2335] "Starting kubelet main sync loop" Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.832092 4756 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.832867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.832912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.832926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4756]: W0224 00:05:43.834131 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.834216 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.834363 4756 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.834470 4756 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" 
Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.834562 4756 state_mem.go:36] "Initialized new in-memory state store" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.855206 4756 policy_none.go:49] "None policy: Start" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.859991 4756 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.860197 4756 state_mem.go:35] "Initializing new in-memory state store" Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.867343 4756 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.922420 4756 manager.go:334] "Starting Device Plugin manager" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.922501 4756 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.922519 4756 server.go:79] "Starting device plugin registration server" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.923153 4756 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.923508 4756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.923718 4756 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.923966 4756 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.924520 4756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.932871 4756 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.932978 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.935584 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.935635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.935659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.935960 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.937089 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.937197 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.938234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.938284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.938302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.938564 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.938883 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.938965 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.938951 4756 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.939648 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.939689 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.939721 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.939920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.939971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.939987 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.940036 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.940058 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.940093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc 
kubenswrapper[4756]: I0224 00:05:43.940388 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.940475 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.940579 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.941280 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.941325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.941344 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.941547 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.941678 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.941744 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.942046 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.942124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.942157 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.943008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.943050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.943109 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.943053 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.943217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.943244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.943321 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.943365 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.944635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.944670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:43 crc kubenswrapper[4756]: I0224 00:05:43.944692 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:43 crc kubenswrapper[4756]: E0224 00:05:43.969130 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.001864 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.001927 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.001965 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002007 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002040 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002153 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002213 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002255 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002290 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002351 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002384 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002433 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002493 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002517 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.002556 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.023655 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.025497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.025555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.025574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.025615 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:05:44 crc kubenswrapper[4756]: E0224 00:05:44.026439 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection 
refused" node="crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104433 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104504 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104543 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104649 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104687 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104690 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104764 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104706 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104771 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104814 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104719 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104900 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104948 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104970 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105008 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105030 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105050 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105098 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105121 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.104776 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105142 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105169 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105196 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105203 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105236 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105120 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105291 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105297 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105230 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.105503 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.227006 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.228755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.228812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.228827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.228863 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:05:44 crc kubenswrapper[4756]: E0224 00:05:44.229419 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: 
connection refused" node="crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.267440 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.282910 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.314374 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: W0224 00:05:44.326406 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b9585727484b085294f93cf5ae2ad92e8e895f91f3224c8f541d0ee9289c4d9c WatchSource:0}: Error finding container b9585727484b085294f93cf5ae2ad92e8e895f91f3224c8f541d0ee9289c4d9c: Status 404 returned error can't find the container with id b9585727484b085294f93cf5ae2ad92e8e895f91f3224c8f541d0ee9289c4d9c Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.328735 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: W0224 00:05:44.332607 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-168252b547d45dd1a1dba55cb23d0b65b8107a6507e836f11b69f39c56f5ab3e WatchSource:0}: Error finding container 168252b547d45dd1a1dba55cb23d0b65b8107a6507e836f11b69f39c56f5ab3e: Status 404 returned error can't find the container with id 168252b547d45dd1a1dba55cb23d0b65b8107a6507e836f11b69f39c56f5ab3e Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.339421 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:05:44 crc kubenswrapper[4756]: W0224 00:05:44.350388 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-4b847946b84456b8d09c79a881fbb1cd1fa3da30c0bd597cb184a158bf8005be WatchSource:0}: Error finding container 4b847946b84456b8d09c79a881fbb1cd1fa3da30c0bd597cb184a158bf8005be: Status 404 returned error can't find the container with id 4b847946b84456b8d09c79a881fbb1cd1fa3da30c0bd597cb184a158bf8005be Feb 24 00:05:44 crc kubenswrapper[4756]: W0224 00:05:44.352999 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-02f03e945889fe384c11f1c686af4e03ec38e37b008154910ffdd0fd59c146e6 WatchSource:0}: Error finding container 02f03e945889fe384c11f1c686af4e03ec38e37b008154910ffdd0fd59c146e6: Status 404 returned error can't find the container with id 02f03e945889fe384c11f1c686af4e03ec38e37b008154910ffdd0fd59c146e6 Feb 24 00:05:44 crc kubenswrapper[4756]: E0224 00:05:44.369845 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="800ms" Feb 24 00:05:44 crc kubenswrapper[4756]: W0224 00:05:44.370238 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c4528b5f1df74f5ace9608fd40d4c08cd21e5ccefed82e00ffc7b92fc4130a59 WatchSource:0}: Error finding container c4528b5f1df74f5ace9608fd40d4c08cd21e5ccefed82e00ffc7b92fc4130a59: Status 404 returned error can't find the container with id 
c4528b5f1df74f5ace9608fd40d4c08cd21e5ccefed82e00ffc7b92fc4130a59 Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.630138 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.631824 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.631873 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.631888 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.631919 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:05:44 crc kubenswrapper[4756]: E0224 00:05:44.632482 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.755123 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.766379 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:24:58.65636936 +0000 UTC Feb 24 00:05:44 crc kubenswrapper[4756]: W0224 00:05:44.817820 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: 
connection refused Feb 24 00:05:44 crc kubenswrapper[4756]: E0224 00:05:44.817944 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.837983 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"168252b547d45dd1a1dba55cb23d0b65b8107a6507e836f11b69f39c56f5ab3e"} Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.839429 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c4528b5f1df74f5ace9608fd40d4c08cd21e5ccefed82e00ffc7b92fc4130a59"} Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.840663 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"02f03e945889fe384c11f1c686af4e03ec38e37b008154910ffdd0fd59c146e6"} Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.842295 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4b847946b84456b8d09c79a881fbb1cd1fa3da30c0bd597cb184a158bf8005be"} Feb 24 00:05:44 crc kubenswrapper[4756]: I0224 00:05:44.844028 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b9585727484b085294f93cf5ae2ad92e8e895f91f3224c8f541d0ee9289c4d9c"} Feb 24 00:05:44 crc kubenswrapper[4756]: W0224 00:05:44.862378 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 24 00:05:44 crc kubenswrapper[4756]: E0224 00:05:44.862561 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:44 crc kubenswrapper[4756]: W0224 00:05:44.897536 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 24 00:05:44 crc kubenswrapper[4756]: E0224 00:05:44.897629 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:45 crc kubenswrapper[4756]: W0224 00:05:45.027583 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 24 00:05:45 crc kubenswrapper[4756]: 
E0224 00:05:45.027694 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:45 crc kubenswrapper[4756]: E0224 00:05:45.171582 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.433276 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.435433 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.435502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.435524 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.435609 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:05:45 crc kubenswrapper[4756]: E0224 00:05:45.436644 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.744959 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 00:05:45 crc kubenswrapper[4756]: E0224 
00:05:45.746379 4756 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.755172 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.766707 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:41:24.04350458 +0000 UTC Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.850795 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7"} Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.850846 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7"} Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.850859 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7"} Feb 24 00:05:45 crc kubenswrapper[4756]: 
I0224 00:05:45.850869 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f"} Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.850908 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.852098 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.852129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.852143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.852727 4756 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245" exitCode=0 Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.852803 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245"} Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.852898 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.854359 4756 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587" exitCode=0 Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.854441 
4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.854443 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587"}
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.854459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.854483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.854496 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.854970 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.855004 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.855014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.856777 4756 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3" exitCode=0
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.856918 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3"}
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.856936 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.859459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.859694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.859711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.861117 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5" exitCode=0
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.861224 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.861165 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5"}
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.863040 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.863081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.863093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.864457 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.865555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.865586 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:45 crc kubenswrapper[4756]: I0224 00:05:45.865599 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:46 crc kubenswrapper[4756]: W0224 00:05:46.710940 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Feb 24 00:05:46 crc kubenswrapper[4756]: E0224 00:05:46.711058 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.754827 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.767347 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:38:46.995339529 +0000 UTC
Feb 24 00:05:46 crc kubenswrapper[4756]: E0224 00:05:46.773003 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.868090 4756 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f" exitCode=0
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.868188 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f"}
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.868257 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.870699 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.870735 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.870746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.876877 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce"}
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.876941 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.877830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.877857 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.877867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.879621 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c"}
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.879654 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af"}
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.879666 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb"}
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.879729 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.880268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.880283 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.880290 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.882751 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.882830 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d"}
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.883220 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550"}
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.883237 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e"}
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.883250 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e"}
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.883487 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.883511 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:46 crc kubenswrapper[4756]: I0224 00:05:46.883521 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:47 crc kubenswrapper[4756]: W0224 00:05:47.024610 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Feb 24 00:05:47 crc kubenswrapper[4756]: E0224 00:05:47.024726 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Feb 24 00:05:47 crc kubenswrapper[4756]: W0224 00:05:47.033917 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Feb 24 00:05:47 crc kubenswrapper[4756]: E0224 00:05:47.033977 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.037582 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.040326 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.040364 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.040377 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.040403 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 24 00:05:47 crc kubenswrapper[4756]: E0224 00:05:47.040765 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.767994 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:12:13.576678835 +0000 UTC
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.887027 4756 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af" exitCode=0
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.887343 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af"}
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.887503 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.888311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.888421 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.888477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.891801 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.892110 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.891882 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.891921 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.891832 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b638f133a35896cc4091deac6e0dca63fd62ae5ccdeeb6f25e9a9de4022a0fbb"}
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.893201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.893220 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.893232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.893370 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.893456 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.893599 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.893999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.894038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:47 crc kubenswrapper[4756]: I0224 00:05:47.894054 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.769015 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:37:46.329471549 +0000 UTC
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.899450 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.899481 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.899498 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.900017 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21"}
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.900098 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b"}
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.900117 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d"}
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.900126 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb"}
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.900139 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd"}
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.900656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.900701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.900722 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.900664 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.900800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:48 crc kubenswrapper[4756]: I0224 00:05:48.900814 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:49 crc kubenswrapper[4756]: I0224 00:05:49.770029 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:39:26.841296336 +0000 UTC
Feb 24 00:05:49 crc kubenswrapper[4756]: I0224 00:05:49.901739 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:49 crc kubenswrapper[4756]: I0224 00:05:49.902795 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:49 crc kubenswrapper[4756]: I0224 00:05:49.902846 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:49 crc kubenswrapper[4756]: I0224 00:05:49.902861 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:49 crc kubenswrapper[4756]: I0224 00:05:49.982788 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:05:49 crc kubenswrapper[4756]: I0224 00:05:49.983028 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:49 crc kubenswrapper[4756]: I0224 00:05:49.984247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:49 crc kubenswrapper[4756]: I0224 00:05:49.984306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:49 crc kubenswrapper[4756]: I0224 00:05:49.984323 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.059969 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.060169 4756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.060222 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.061459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.061494 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.061509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.080137 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.241875 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.243879 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.243941 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.243960 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.243995 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.658212 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.658521 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.660262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.660313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.660330 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.770652 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:43:16.263695187 +0000 UTC
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.971824 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.972050 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.973958 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.974010 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:50 crc kubenswrapper[4756]: I0224 00:05:50.974032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.043942 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.054358 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.466707 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.467113 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.469618 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.469694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.469714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.496738 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.497146 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.499057 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.499155 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.499174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.629485 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.771634 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 22:13:13.166528663 +0000 UTC
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.907998 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.908127 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.909791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.909834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.909854 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.910823 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.910912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:51 crc kubenswrapper[4756]: I0224 00:05:51.910937 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.772562 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 22:24:54.157238567 +0000 UTC
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.825016 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.825312 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.826685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.826724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.826735 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.911344 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.912857 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.912912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.912923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.983325 4756 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 24 00:05:52 crc kubenswrapper[4756]: I0224 00:05:52.983421 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 24 00:05:53 crc kubenswrapper[4756]: I0224 00:05:53.773665 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 02:12:28.905952587 +0000 UTC
Feb 24 00:05:53 crc kubenswrapper[4756]: E0224 00:05:53.939429 4756 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 24 00:05:54 crc kubenswrapper[4756]: I0224 00:05:54.774129 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:38:20.034820125 +0000 UTC
Feb 24 00:05:55 crc kubenswrapper[4756]: I0224 00:05:55.631846 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:05:55 crc kubenswrapper[4756]: I0224 00:05:55.632036 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:55 crc kubenswrapper[4756]: I0224 00:05:55.633529 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:55 crc kubenswrapper[4756]: I0224 00:05:55.633669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:55 crc kubenswrapper[4756]: I0224 00:05:55.633695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:55 crc kubenswrapper[4756]: I0224 00:05:55.638985 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:05:55 crc kubenswrapper[4756]: I0224 00:05:55.775009 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:06:51.343611491 +0000 UTC
Feb 24 00:05:55 crc kubenswrapper[4756]: I0224 00:05:55.920052 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 24 00:05:55 crc kubenswrapper[4756]: I0224 00:05:55.921347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:05:55 crc kubenswrapper[4756]: I0224 00:05:55.921406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:05:55 crc kubenswrapper[4756]: I0224 00:05:55.921429 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:05:56 crc kubenswrapper[4756]: I0224 00:05:56.775635 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 05:49:28.917588503 +0000 UTC
Feb 24 00:05:57 crc kubenswrapper[4756]: W0224 00:05:57.746088 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z
Feb 24 00:05:57 crc kubenswrapper[4756]: E0224 00:05:57.746213 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.748787 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z
Feb 24 00:05:57 crc kubenswrapper[4756]: W0224 00:05:57.749355 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z
Feb 24 00:05:57 crc kubenswrapper[4756]: E0224 00:05:57.749406 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 00:05:57 crc kubenswrapper[4756]: W0224 00:05:57.749593 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z
Feb 24 00:05:57 crc kubenswrapper[4756]: E0224 00:05:57.749645 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.754112 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.754181 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 24 00:05:57 crc kubenswrapper[4756]: E0224 00:05:57.757392 4756 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 24 00:05:57 crc kubenswrapper[4756]: E0224 00:05:57.757895 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.759166 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z Feb 24 00:05:57 crc kubenswrapper[4756]: E0224 00:05:57.760958 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 00:05:57 crc kubenswrapper[4756]: W0224 00:05:57.762788 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z Feb 24 00:05:57 crc kubenswrapper[4756]: E0224 00:05:57.762864 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:05:57 crc kubenswrapper[4756]: 
E0224 00:05:57.767458 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:57Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189705f89dbe3053 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:05:43.750094931 +0000 UTC m=+0.660957584,LastTimestamp:2026-02-24 00:05:43.750094931 +0000 UTC m=+0.660957584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.776289 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:24:50.72804635 +0000 UTC Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.833928 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]log ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]etcd ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 24 00:05:57 crc kubenswrapper[4756]: 
[+]poststarthook/openshift.io-startkubeinformers ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/generic-apiserver-start-informers ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/priority-and-fairness-filter ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/start-apiextensions-informers ok Feb 24 00:05:57 crc kubenswrapper[4756]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 24 00:05:57 crc kubenswrapper[4756]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/start-system-namespaces-controller ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 24 00:05:57 crc kubenswrapper[4756]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 24 00:05:57 crc kubenswrapper[4756]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 24 00:05:57 crc kubenswrapper[4756]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 24 00:05:57 crc kubenswrapper[4756]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 24 00:05:57 crc kubenswrapper[4756]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/start-kube-aggregator-informers ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 24 00:05:57 crc kubenswrapper[4756]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 24 00:05:57 crc kubenswrapper[4756]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]autoregister-completion ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/apiservice-openapi-controller ok Feb 24 00:05:57 crc kubenswrapper[4756]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 24 00:05:57 crc kubenswrapper[4756]: livez check failed Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.834028 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.926260 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.927614 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b638f133a35896cc4091deac6e0dca63fd62ae5ccdeeb6f25e9a9de4022a0fbb" exitCode=255 Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.927660 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b638f133a35896cc4091deac6e0dca63fd62ae5ccdeeb6f25e9a9de4022a0fbb"} Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.927837 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.928642 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.928693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.928705 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:57 crc kubenswrapper[4756]: I0224 00:05:57.929416 4756 scope.go:117] "RemoveContainer" containerID="b638f133a35896cc4091deac6e0dca63fd62ae5ccdeeb6f25e9a9de4022a0fbb" Feb 24 00:05:58 crc kubenswrapper[4756]: I0224 00:05:58.769974 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:58Z is after 2026-02-23T05:33:13Z Feb 24 00:05:58 crc kubenswrapper[4756]: I0224 00:05:58.777409 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:29:49.07328309 +0000 UTC Feb 24 00:05:58 crc kubenswrapper[4756]: I0224 00:05:58.932422 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 00:05:58 crc 
kubenswrapper[4756]: I0224 00:05:58.934224 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ae83d16e1a39ead351afc3482f21a0687d339631b540520c9bc766a1eb4d535f"} Feb 24 00:05:58 crc kubenswrapper[4756]: I0224 00:05:58.934396 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:58 crc kubenswrapper[4756]: I0224 00:05:58.935441 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:58 crc kubenswrapper[4756]: I0224 00:05:58.935502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:58 crc kubenswrapper[4756]: I0224 00:05:58.935524 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.759612 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:05:59Z is after 2026-02-23T05:33:13Z Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.777515 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:19:28.84826495 +0000 UTC Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.938990 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.939889 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.942645 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ae83d16e1a39ead351afc3482f21a0687d339631b540520c9bc766a1eb4d535f" exitCode=255 Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.942721 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ae83d16e1a39ead351afc3482f21a0687d339631b540520c9bc766a1eb4d535f"} Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.942808 4756 scope.go:117] "RemoveContainer" containerID="b638f133a35896cc4091deac6e0dca63fd62ae5ccdeeb6f25e9a9de4022a0fbb" Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.943031 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.944751 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.944807 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.944827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:05:59 crc kubenswrapper[4756]: I0224 00:05:59.945815 4756 scope.go:117] "RemoveContainer" containerID="ae83d16e1a39ead351afc3482f21a0687d339631b540520c9bc766a1eb4d535f" Feb 24 00:05:59 crc kubenswrapper[4756]: E0224 00:05:59.946259 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:00 crc kubenswrapper[4756]: I0224 00:06:00.760341 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:00Z is after 2026-02-23T05:33:13Z Feb 24 00:06:00 crc kubenswrapper[4756]: I0224 00:06:00.778427 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:57:00.486671361 +0000 UTC Feb 24 00:06:00 crc kubenswrapper[4756]: I0224 00:06:00.949237 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.497472 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.497749 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.499448 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.499515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.499535 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:01 crc 
kubenswrapper[4756]: I0224 00:06:01.500695 4756 scope.go:117] "RemoveContainer" containerID="ae83d16e1a39ead351afc3482f21a0687d339631b540520c9bc766a1eb4d535f" Feb 24 00:06:01 crc kubenswrapper[4756]: E0224 00:06:01.500987 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.671297 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.671582 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.673588 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.673639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.673659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.692180 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.759524 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-24T00:06:01Z is after 2026-02-23T05:33:13Z Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.779345 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:48:39.643947444 +0000 UTC Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.957006 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.958704 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.958769 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:01 crc kubenswrapper[4756]: I0224 00:06:01.958817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:02 crc kubenswrapper[4756]: W0224 00:06:02.173648 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:02Z is after 2026-02-23T05:33:13Z Feb 24 00:06:02 crc kubenswrapper[4756]: E0224 00:06:02.173763 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.760739 4756 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:02Z is after 2026-02-23T05:33:13Z Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.780391 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:47:15.753258501 +0000 UTC Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.831356 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.831616 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.833305 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.833341 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.833355 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.833929 4756 scope.go:117] "RemoveContainer" containerID="ae83d16e1a39ead351afc3482f21a0687d339631b540520c9bc766a1eb4d535f" Feb 24 00:06:02 crc kubenswrapper[4756]: E0224 00:06:02.834169 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.836383 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.959491 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.960431 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.960497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.960516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.961419 4756 scope.go:117] "RemoveContainer" containerID="ae83d16e1a39ead351afc3482f21a0687d339631b540520c9bc766a1eb4d535f" Feb 24 00:06:02 crc kubenswrapper[4756]: E0224 00:06:02.961650 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.982826 4756 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Feb 24 00:06:02 crc kubenswrapper[4756]: I0224 00:06:02.983303 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:06:03 crc kubenswrapper[4756]: I0224 00:06:03.758152 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:03Z is after 2026-02-23T05:33:13Z Feb 24 00:06:03 crc kubenswrapper[4756]: I0224 00:06:03.781209 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:30:07.972468914 +0000 UTC Feb 24 00:06:03 crc kubenswrapper[4756]: E0224 00:06:03.940411 4756 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:06:04 crc kubenswrapper[4756]: I0224 00:06:04.162208 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:04 crc kubenswrapper[4756]: E0224 00:06:04.162452 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:04Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 24 00:06:04 crc kubenswrapper[4756]: I0224 00:06:04.164215 4756 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:04 crc kubenswrapper[4756]: I0224 00:06:04.164279 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:04 crc kubenswrapper[4756]: I0224 00:06:04.164295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:04 crc kubenswrapper[4756]: I0224 00:06:04.164342 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:06:04 crc kubenswrapper[4756]: E0224 00:06:04.169617 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:04Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 00:06:04 crc kubenswrapper[4756]: W0224 00:06:04.377920 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:04Z is after 2026-02-23T05:33:13Z Feb 24 00:06:04 crc kubenswrapper[4756]: E0224 00:06:04.378048 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:04 crc kubenswrapper[4756]: I0224 00:06:04.759715 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:04Z is after 2026-02-23T05:33:13Z Feb 24 00:06:04 crc kubenswrapper[4756]: I0224 00:06:04.781681 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:18:03.551599349 +0000 UTC Feb 24 00:06:05 crc kubenswrapper[4756]: I0224 00:06:05.632736 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:05 crc kubenswrapper[4756]: I0224 00:06:05.633528 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:05 crc kubenswrapper[4756]: I0224 00:06:05.635730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:05 crc kubenswrapper[4756]: I0224 00:06:05.635789 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:05 crc kubenswrapper[4756]: I0224 00:06:05.635800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:05 crc kubenswrapper[4756]: I0224 00:06:05.636570 4756 scope.go:117] "RemoveContainer" containerID="ae83d16e1a39ead351afc3482f21a0687d339631b540520c9bc766a1eb4d535f" Feb 24 00:06:05 crc kubenswrapper[4756]: E0224 00:06:05.636764 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:05 crc kubenswrapper[4756]: I0224 00:06:05.759907 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:05Z is after 2026-02-23T05:33:13Z Feb 24 00:06:05 crc kubenswrapper[4756]: I0224 00:06:05.782749 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 10:24:05.660003984 +0000 UTC Feb 24 00:06:06 crc kubenswrapper[4756]: I0224 00:06:06.032356 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 00:06:06 crc kubenswrapper[4756]: E0224 00:06:06.037987 4756 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:06 crc kubenswrapper[4756]: W0224 00:06:06.040396 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:06Z is after 2026-02-23T05:33:13Z Feb 24 00:06:06 crc kubenswrapper[4756]: E0224 00:06:06.040475 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to 
list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:06 crc kubenswrapper[4756]: I0224 00:06:06.759687 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:06Z is after 2026-02-23T05:33:13Z Feb 24 00:06:06 crc kubenswrapper[4756]: I0224 00:06:06.783453 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 16:27:59.97250987 +0000 UTC Feb 24 00:06:07 crc kubenswrapper[4756]: I0224 00:06:07.759400 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:07Z is after 2026-02-23T05:33:13Z Feb 24 00:06:07 crc kubenswrapper[4756]: E0224 00:06:07.773986 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:07Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189705f89dbe3053 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:05:43.750094931 +0000 UTC m=+0.660957584,LastTimestamp:2026-02-24 00:05:43.750094931 +0000 UTC m=+0.660957584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:06:07 crc kubenswrapper[4756]: I0224 00:06:07.784395 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 00:16:54.296266222 +0000 UTC Feb 24 00:06:08 crc kubenswrapper[4756]: I0224 00:06:08.760359 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:08Z is after 2026-02-23T05:33:13Z Feb 24 00:06:08 crc kubenswrapper[4756]: I0224 00:06:08.784516 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 15:15:02.122779771 +0000 UTC Feb 24 00:06:09 crc kubenswrapper[4756]: I0224 00:06:09.757271 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:09Z is after 2026-02-23T05:33:13Z Feb 24 00:06:09 crc kubenswrapper[4756]: I0224 00:06:09.784823 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:52:49.265281652 +0000 UTC Feb 24 00:06:09 crc kubenswrapper[4756]: W0224 00:06:09.859385 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:09Z is after 2026-02-23T05:33:13Z Feb 24 00:06:09 crc kubenswrapper[4756]: E0224 00:06:09.859510 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:09Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:10 crc kubenswrapper[4756]: I0224 00:06:10.759881 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:10Z is after 2026-02-23T05:33:13Z Feb 24 00:06:10 crc kubenswrapper[4756]: I0224 00:06:10.785742 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:50:58.431618215 +0000 UTC Feb 24 00:06:11 crc kubenswrapper[4756]: E0224 00:06:11.167333 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:11Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 24 00:06:11 crc kubenswrapper[4756]: I0224 00:06:11.170483 4756 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 24 00:06:11 crc kubenswrapper[4756]: I0224 00:06:11.171880 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:11 crc kubenswrapper[4756]: I0224 00:06:11.171913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:11 crc kubenswrapper[4756]: I0224 00:06:11.171924 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:11 crc kubenswrapper[4756]: I0224 00:06:11.171950 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:06:11 crc kubenswrapper[4756]: E0224 00:06:11.174830 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:11Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 00:06:11 crc kubenswrapper[4756]: W0224 00:06:11.568292 4756 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:11Z is after 2026-02-23T05:33:13Z Feb 24 00:06:11 crc kubenswrapper[4756]: E0224 00:06:11.568416 4756 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:11Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 24 00:06:11 crc 
kubenswrapper[4756]: I0224 00:06:11.758198 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:11Z is after 2026-02-23T05:33:13Z Feb 24 00:06:11 crc kubenswrapper[4756]: I0224 00:06:11.786595 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:13:34.659155866 +0000 UTC Feb 24 00:06:12 crc kubenswrapper[4756]: I0224 00:06:12.759650 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 2026-02-23T05:33:13Z Feb 24 00:06:12 crc kubenswrapper[4756]: I0224 00:06:12.787429 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:53:00.212869198 +0000 UTC Feb 24 00:06:12 crc kubenswrapper[4756]: I0224 00:06:12.984384 4756 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:06:12 crc kubenswrapper[4756]: I0224 00:06:12.984491 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:06:12 crc kubenswrapper[4756]: I0224 00:06:12.984567 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:06:12 crc kubenswrapper[4756]: I0224 00:06:12.984827 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:12 crc kubenswrapper[4756]: I0224 00:06:12.986512 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:12 crc kubenswrapper[4756]: I0224 00:06:12.986588 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:12 crc kubenswrapper[4756]: I0224 00:06:12.986609 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:12 crc kubenswrapper[4756]: I0224 00:06:12.987484 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 24 00:06:12 crc kubenswrapper[4756]: I0224 00:06:12.987786 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7" gracePeriod=30 Feb 24 00:06:13 crc kubenswrapper[4756]: I0224 00:06:13.757217 4756 csi_plugin.go:884] Failed to contact API server when waiting 
for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:13Z is after 2026-02-23T05:33:13Z Feb 24 00:06:13 crc kubenswrapper[4756]: I0224 00:06:13.787852 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:14:37.834628227 +0000 UTC Feb 24 00:06:13 crc kubenswrapper[4756]: E0224 00:06:13.940629 4756 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 24 00:06:13 crc kubenswrapper[4756]: I0224 00:06:13.994397 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 00:06:13 crc kubenswrapper[4756]: I0224 00:06:13.995199 4756 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7" exitCode=255 Feb 24 00:06:13 crc kubenswrapper[4756]: I0224 00:06:13.995263 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7"} Feb 24 00:06:13 crc kubenswrapper[4756]: I0224 00:06:13.995304 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26"} Feb 24 00:06:13 crc kubenswrapper[4756]: I0224 00:06:13.995445 4756 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 24 00:06:13 crc kubenswrapper[4756]: I0224 00:06:13.997534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:13 crc kubenswrapper[4756]: I0224 00:06:13.997585 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:13 crc kubenswrapper[4756]: I0224 00:06:13.997604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:14 crc kubenswrapper[4756]: I0224 00:06:14.759022 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:14Z is after 2026-02-23T05:33:13Z Feb 24 00:06:14 crc kubenswrapper[4756]: I0224 00:06:14.788617 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:36:44.438091455 +0000 UTC Feb 24 00:06:15 crc kubenswrapper[4756]: I0224 00:06:15.631954 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:06:15 crc kubenswrapper[4756]: I0224 00:06:15.632255 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:15 crc kubenswrapper[4756]: I0224 00:06:15.634516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:15 crc kubenswrapper[4756]: I0224 00:06:15.634576 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:15 crc kubenswrapper[4756]: I0224 00:06:15.634596 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:15 crc kubenswrapper[4756]: I0224 00:06:15.760030 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:15Z is after 2026-02-23T05:33:13Z Feb 24 00:06:15 crc kubenswrapper[4756]: I0224 00:06:15.789760 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:34:24.286444549 +0000 UTC Feb 24 00:06:16 crc kubenswrapper[4756]: I0224 00:06:16.758725 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:16Z is after 2026-02-23T05:33:13Z Feb 24 00:06:16 crc kubenswrapper[4756]: I0224 00:06:16.790905 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 05:03:05.148289355 +0000 UTC Feb 24 00:06:16 crc kubenswrapper[4756]: I0224 00:06:16.832392 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:16 crc kubenswrapper[4756]: I0224 00:06:16.834450 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:16 crc kubenswrapper[4756]: I0224 00:06:16.834669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:16 crc kubenswrapper[4756]: I0224 00:06:16.834860 4756 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:16 crc kubenswrapper[4756]: I0224 00:06:16.836027 4756 scope.go:117] "RemoveContainer" containerID="ae83d16e1a39ead351afc3482f21a0687d339631b540520c9bc766a1eb4d535f" Feb 24 00:06:17 crc kubenswrapper[4756]: I0224 00:06:17.760992 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:13Z Feb 24 00:06:17 crc kubenswrapper[4756]: E0224 00:06:17.777783 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189705f89dbe3053 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:05:43.750094931 +0000 UTC m=+0.660957584,LastTimestamp:2026-02-24 00:05:43.750094931 +0000 UTC m=+0.660957584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:06:17 crc kubenswrapper[4756]: I0224 00:06:17.791347 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:02:23.029186604 +0000 UTC Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.010221 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.012089 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c"} Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.012294 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.013138 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.013179 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.013191 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:18 crc kubenswrapper[4756]: E0224 00:06:18.172938 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.174970 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.177535 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.177572 4756 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.177597 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.177628 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:06:18 crc kubenswrapper[4756]: E0224 00:06:18.181146 4756 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:13Z" node="crc" Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.759183 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:13Z Feb 24 00:06:18 crc kubenswrapper[4756]: I0224 00:06:18.792463 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:49:57.134869348 +0000 UTC Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.017483 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.018263 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.021434 4756 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c" exitCode=255 Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.021502 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c"} Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.021568 4756 scope.go:117] "RemoveContainer" containerID="ae83d16e1a39ead351afc3482f21a0687d339631b540520c9bc766a1eb4d535f" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.021797 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.023212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.023278 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.023302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.024200 4756 scope.go:117] "RemoveContainer" containerID="7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c" Feb 24 00:06:19 crc kubenswrapper[4756]: E0224 00:06:19.024547 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 
00:06:19.757772 4756 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:19Z is after 2026-02-23T05:33:13Z Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.793434 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 19:12:12.618540652 +0000 UTC Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.840563 4756 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.983458 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.983643 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.984963 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.984995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:19 crc kubenswrapper[4756]: I0224 00:06:19.985009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.026425 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.309330 4756 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.760820 4756 apiserver.go:52] "Watching apiserver" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.767485 4756 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.768044 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.768716 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.768880 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.769166 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.769464 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.769496 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.770196 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.770257 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.770982 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.771098 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.776290 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.776479 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.776641 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.776738 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.776872 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.776945 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.776978 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.777187 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.779631 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.793968 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline 
is 2025-11-14 04:36:15.056683463 +0000 UTC Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.824626 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.841291 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.854323 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.864840 4756 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.866042 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.876718 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.890432 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.909454 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947395 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947496 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947542 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947578 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947615 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947648 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947678 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947709 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947742 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947776 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947813 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947851 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.947941 4756 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.948049 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.948348 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.948353 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.948496 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.948630 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.948683 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.948864 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.948942 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.948981 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949182 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949201 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949179 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949282 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949377 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949385 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949607 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949652 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.949763 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:06:21.449736473 +0000 UTC m=+38.360599106 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949788 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949805 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949841 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949863 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 00:06:20 
crc kubenswrapper[4756]: I0224 00:06:20.949880 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949901 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949941 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949971 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.949989 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950012 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950029 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950047 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950090 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950117 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950137 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " 
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950158 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950179 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950227 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950262 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950281 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950302 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950321 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950340 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950362 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950380 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950398 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 
00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950416 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950422 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950434 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950501 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950547 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950572 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950577 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950666 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950587 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950745 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950782 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950822 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950854 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950858 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950887 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950926 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.950970 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951004 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951007 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: 
"serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951036 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951039 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951039 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951111 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951152 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951232 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951268 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951303 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951333 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951364 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951395 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951428 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951458 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951490 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 24 00:06:20 crc 
kubenswrapper[4756]: I0224 00:06:20.951526 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951564 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951598 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951636 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951672 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951704 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951740 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951773 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951809 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951848 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951883 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 24 00:06:20 
crc kubenswrapper[4756]: I0224 00:06:20.951921 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951956 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952010 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952042 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952098 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952127 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952162 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952195 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952225 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952260 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952291 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952328 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952362 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952393 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952425 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952461 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952494 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952522 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952558 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952592 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952622 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952651 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952682 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952713 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952742 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952773 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952804 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952835 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952869 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952903 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952932 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953585 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953639 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953676 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953710 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953745 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953782 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953822 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953856 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953893 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953974 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954008 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954043 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954104 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954149 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954186 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954218 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954261 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954303 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954344 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954392 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954435 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954481 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954522 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954558 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954593 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954636 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954673 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954715 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954754 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954793 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954836 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954875 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954920 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954960 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954996 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955036 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955093 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955132 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955169 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955206 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955246 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955282 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955324 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955363 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955413 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955490 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955537 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955572 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955617 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955653 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955696 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955737 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955777 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955818 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955850 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955884 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955924 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955966 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956002 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956050 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956124 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956171 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956212 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956247 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956290 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956337 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956383 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956421 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956460 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956599 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956645 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956681 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956723 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951113 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951150 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951451 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951463 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951488 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.951862 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952360 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957371 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952838 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.952971 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953309 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953343 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953415 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953550 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.953588 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.954246 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955441 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955621 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.955849 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956313 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956710 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956784 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957423 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956803 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957659 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.956769 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957764 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957768 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957820 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957844 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957893 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957945 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957866 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958112 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958183 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 
00:06:20.958252 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958304 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958349 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958393 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958434 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958471 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958510 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958608 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958681 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958725 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958766 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958802 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958847 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958892 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958935 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 
00:06:20.958987 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.959042 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.959132 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.959178 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.959243 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.959303 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957970 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.957978 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958255 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958412 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.960015 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958496 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.958669 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.959004 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.959158 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.959163 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.959359 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.959815 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.960372 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.960429 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.960807 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.961095 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.961134 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.960295 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.961448 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.961501 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.961955 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.962565 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.962935 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963142 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963390 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.959453 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963449 4756 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963468 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963482 4756 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963498 4756 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963716 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963747 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963764 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963786 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963803 4756 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963818 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963831 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963902 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963919 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963931 4756 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963942 4756 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963953 4756 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963963 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963974 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963984 4756 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963995 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964009 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964019 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964031 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964042 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node 
\"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964053 4756 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964090 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964105 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964119 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964141 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964156 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964170 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964182 4756 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964196 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964209 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964224 4756 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964237 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964250 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964261 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964271 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964284 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964295 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964305 4756 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964316 4756 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964327 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964337 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964347 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") 
on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964357 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964368 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964379 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964390 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963639 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.963964 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964394 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964699 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.964743 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.969477 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.965121 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.965414 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.965425 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.965431 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.965451 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.969554 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.965519 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.965754 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.965982 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966039 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966182 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966195 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966181 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966166 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966407 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966647 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966709 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966743 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966750 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966775 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.966807 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.967112 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.967163 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.970143 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.967282 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.967457 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.967459 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.967558 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.968137 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.968406 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.968590 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.968649 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.968678 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.968749 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.969134 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.969440 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.970452 4756 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.971505 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.971924 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.972266 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.972397 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:21.472362771 +0000 UTC m=+38.383225604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.973228 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.973354 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.973548 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.973622 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:21.473604817 +0000 UTC m=+38.384467480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.974945 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.976439 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.976786 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.977358 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.979353 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.979701 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.980246 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.980711 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.981046 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.981208 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.982100 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.982437 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.982664 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.983016 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.988718 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.991090 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.993089 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.993188 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.993672 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.993726 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.994118 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.994172 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.994194 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.994203 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.994221 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.994155 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: E0224 00:06:20.994302 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:21.494270728 +0000 UTC m=+38.405133571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.995690 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.994270 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" 
(OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.995031 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.995649 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.995821 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.996140 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.996650 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.996724 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.997470 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.998118 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:06:20 crc kubenswrapper[4756]: I0224 00:06:20.998526 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.001248 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.001425 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.001596 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.001607 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.001723 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.001968 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.002057 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.002354 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.002421 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.002392 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.002396 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.002512 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.002356 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.002661 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.002821 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.002971 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.004764 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.005244 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.005278 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.005296 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.005379 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:21.505352741 +0000 UTC m=+38.416215374 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.005575 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.005691 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.005946 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.006105 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.006272 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.006408 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.006425 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.007235 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.007345 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.007442 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.007907 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.008402 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.008643 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.008678 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.011431 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.011805 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.012220 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.012620 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.012915 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.013108 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.013239 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.016328 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.016483 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.016823 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.018548 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.023808 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.026236 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.026494 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.034561 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065519 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065621 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065690 4756 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") 
on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065707 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065722 4756 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065738 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065760 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065775 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065789 4756 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065803 4756 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065816 4756 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065829 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065841 4756 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065871 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065885 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065876 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065949 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.065898 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066010 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066030 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066051 4756 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066109 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066127 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066144 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath 
\"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066159 4756 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066174 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066190 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066204 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066221 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066236 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066250 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066265 4756 
reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066280 4756 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066297 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066312 4756 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066327 4756 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066356 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066371 4756 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066389 4756 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066407 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066423 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066437 4756 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066455 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066469 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066483 4756 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066497 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 
00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066511 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066525 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066539 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066553 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066568 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066583 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066598 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066615 4756 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066630 4756 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066646 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066660 4756 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066676 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066690 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066706 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066722 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: 
\"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066738 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066753 4756 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066767 4756 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066787 4756 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066807 4756 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066822 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066838 4756 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066856 4756 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066873 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066890 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066908 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066925 4756 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066942 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066959 4756 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") 
on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066974 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.066989 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067004 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067019 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067037 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067053 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067087 4756 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067106 4756 
reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067121 4756 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067136 4756 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067151 4756 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067167 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067182 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067198 4756 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067213 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067230 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067245 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067260 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067276 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067292 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067307 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067328 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") 
on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067346 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067361 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067376 4756 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067392 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067406 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067420 4756 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067437 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067453 4756 
reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067469 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067484 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067509 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067527 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067541 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067558 4756 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067575 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067589 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067606 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067620 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067635 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067650 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067666 4756 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067682 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node 
\"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067698 4756 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067714 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067730 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067745 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067761 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067780 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067794 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067808 4756 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067823 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067839 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067854 4756 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067869 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067883 4756 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067898 4756 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067915 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067930 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067945 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067962 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067979 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.067996 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.068011 4756 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.068026 4756 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.068041 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.068056 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.068090 4756 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.068106 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.068123 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.068142 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.088653 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.097694 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.105520 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 24 00:06:21 crc kubenswrapper[4756]: W0224 00:06:21.108786 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-8f79b769b6ddb8cbc717b17a25e2389c9518eb1075465eea107ac72c5b73c90e WatchSource:0}: Error finding container 8f79b769b6ddb8cbc717b17a25e2389c9518eb1075465eea107ac72c5b73c90e: Status 404 returned error can't find the container with id 8f79b769b6ddb8cbc717b17a25e2389c9518eb1075465eea107ac72c5b73c90e Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.116633 4756 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:06:21 crc kubenswrapper[4756]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 24 00:06:21 crc kubenswrapper[4756]: set -o allexport Feb 24 00:06:21 crc kubenswrapper[4756]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 24 00:06:21 crc kubenswrapper[4756]: source /etc/kubernetes/apiserver-url.env Feb 24 00:06:21 crc kubenswrapper[4756]: else Feb 24 00:06:21 crc kubenswrapper[4756]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 24 00:06:21 crc kubenswrapper[4756]: exit 1 Feb 24 00:06:21 crc kubenswrapper[4756]: fi Feb 24 00:06:21 crc kubenswrapper[4756]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 24 00:06:21 crc 
kubenswrapper[4756]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Valu
e:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metad
ata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 00:06:21 crc kubenswrapper[4756]: > logger="UnhandledError" Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.118727 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.119514 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.120760 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 24 00:06:21 crc kubenswrapper[4756]: W0224 00:06:21.121725 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-4d40c648641228660e5f6411a4e716580e6f16e726b07b48c0ffd792705e5184 WatchSource:0}: Error finding container 4d40c648641228660e5f6411a4e716580e6f16e726b07b48c0ffd792705e5184: Status 404 returned error can't find the container with id 4d40c648641228660e5f6411a4e716580e6f16e726b07b48c0ffd792705e5184 Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.124204 4756 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:06:21 crc kubenswrapper[4756]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 00:06:21 crc kubenswrapper[4756]: if [[ -f "/env/_master" ]]; then Feb 24 00:06:21 crc kubenswrapper[4756]: set -o allexport Feb 24 00:06:21 crc kubenswrapper[4756]: source "/env/_master" Feb 24 00:06:21 crc kubenswrapper[4756]: set +o allexport Feb 24 00:06:21 crc kubenswrapper[4756]: fi Feb 24 00:06:21 crc kubenswrapper[4756]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 24 00:06:21 crc kubenswrapper[4756]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 24 00:06:21 crc kubenswrapper[4756]: ho_enable="--enable-hybrid-overlay" Feb 24 00:06:21 crc kubenswrapper[4756]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 24 00:06:21 crc kubenswrapper[4756]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 24 00:06:21 crc kubenswrapper[4756]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 24 00:06:21 crc kubenswrapper[4756]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 00:06:21 crc kubenswrapper[4756]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 24 00:06:21 crc kubenswrapper[4756]: --webhook-host=127.0.0.1 \ Feb 24 00:06:21 crc kubenswrapper[4756]: --webhook-port=9743 \ Feb 24 00:06:21 crc kubenswrapper[4756]: ${ho_enable} \ Feb 24 00:06:21 crc kubenswrapper[4756]: --enable-interconnect \ Feb 24 00:06:21 crc kubenswrapper[4756]: --disable-approver \ Feb 24 00:06:21 crc kubenswrapper[4756]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 24 00:06:21 crc kubenswrapper[4756]: --wait-for-kubernetes-api=200s \ Feb 24 00:06:21 crc kubenswrapper[4756]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 24 00:06:21 crc kubenswrapper[4756]: --loglevel="${LOGLEVEL}" Feb 24 00:06:21 crc kubenswrapper[4756]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 00:06:21 crc kubenswrapper[4756]: > logger="UnhandledError" Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.126858 4756 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:06:21 crc kubenswrapper[4756]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 00:06:21 crc 
kubenswrapper[4756]: if [[ -f "/env/_master" ]]; then Feb 24 00:06:21 crc kubenswrapper[4756]: set -o allexport Feb 24 00:06:21 crc kubenswrapper[4756]: source "/env/_master" Feb 24 00:06:21 crc kubenswrapper[4756]: set +o allexport Feb 24 00:06:21 crc kubenswrapper[4756]: fi Feb 24 00:06:21 crc kubenswrapper[4756]: Feb 24 00:06:21 crc kubenswrapper[4756]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 24 00:06:21 crc kubenswrapper[4756]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 00:06:21 crc kubenswrapper[4756]: --disable-webhook \ Feb 24 00:06:21 crc kubenswrapper[4756]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 24 00:06:21 crc kubenswrapper[4756]: --loglevel="${LOGLEVEL}" Feb 24 00:06:21 crc kubenswrapper[4756]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 00:06:21 crc kubenswrapper[4756]: > logger="UnhandledError" Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.128251 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.472665 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.472785 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.473007 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:06:22.472959315 +0000 UTC m=+39.383821988 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.473011 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.473183 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:22.473167061 +0000 UTC m=+39.384029734 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.497822 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.514467 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.519461 4756 scope.go:117] "RemoveContainer" containerID="7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c" Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.519742 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.525583 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.549217 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.560735 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.568490 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.574188 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 
00:06:21.574273 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.574298 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.574423 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.574487 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:22.574468478 +0000 UTC m=+39.485331111 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.574574 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.574590 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.574603 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.574629 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:22.574621702 +0000 UTC m=+39.485484325 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.574680 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.574691 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.574699 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:21 crc kubenswrapper[4756]: E0224 00:06:21.574719 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:22.574713195 +0000 UTC m=+39.485575828 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.578261 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.588094 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.794811 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:22:15.53303655 +0000 UTC Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.837557 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.838160 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.839447 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.840087 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.841099 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.841595 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.842206 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.843168 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.843757 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.844666 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.845161 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.846224 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.846707 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.847332 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.848261 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.848761 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.849703 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.850126 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.850691 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" 
path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.851773 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.852279 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.853415 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.853984 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.855258 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.855835 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.856634 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.857935 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.858445 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.859480 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.860014 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.860878 4756 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.860995 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.862868 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.863905 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.864401 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.865907 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.866591 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.867841 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.868794 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.869587 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.870093 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.870752 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.871581 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.872226 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.872688 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.873424 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.873971 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.874724 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.875243 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.875705 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.878152 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.879289 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.880386 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 24 00:06:21 crc kubenswrapper[4756]: I0224 00:06:21.882026 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.037487 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8f79b769b6ddb8cbc717b17a25e2389c9518eb1075465eea107ac72c5b73c90e"} Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.039535 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4d40c648641228660e5f6411a4e716580e6f16e726b07b48c0ffd792705e5184"} Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.039982 4756 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:06:22 crc kubenswrapper[4756]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 24 00:06:22 crc kubenswrapper[4756]: set -o allexport Feb 24 00:06:22 crc 
kubenswrapper[4756]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 24 00:06:22 crc kubenswrapper[4756]: source /etc/kubernetes/apiserver-url.env Feb 24 00:06:22 crc kubenswrapper[4756]: else Feb 24 00:06:22 crc kubenswrapper[4756]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 24 00:06:22 crc kubenswrapper[4756]: exit 1 Feb 24 00:06:22 crc kubenswrapper[4756]: fi Feb 24 00:06:22 crc kubenswrapper[4756]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 24 00:06:22 crc kubenswrapper[4756]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f7
99f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 00:06:22 crc kubenswrapper[4756]: > logger="UnhandledError" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.040576 4756 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3297b3caf35a33e90b5b9b6d2abd509b23e39cc44fd18058150aff15a4edd2fa"} Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.041205 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.041368 4756 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:06:22 crc kubenswrapper[4756]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 00:06:22 crc kubenswrapper[4756]: if [[ -f "/env/_master" ]]; then Feb 24 00:06:22 crc kubenswrapper[4756]: set -o allexport Feb 24 00:06:22 crc kubenswrapper[4756]: source "/env/_master" Feb 24 00:06:22 crc kubenswrapper[4756]: set +o allexport Feb 24 00:06:22 crc kubenswrapper[4756]: fi Feb 24 00:06:22 crc kubenswrapper[4756]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 24 00:06:22 crc kubenswrapper[4756]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 24 00:06:22 crc kubenswrapper[4756]: ho_enable="--enable-hybrid-overlay" Feb 24 00:06:22 crc kubenswrapper[4756]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 24 00:06:22 crc kubenswrapper[4756]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 24 00:06:22 crc kubenswrapper[4756]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 24 00:06:22 crc kubenswrapper[4756]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 00:06:22 crc kubenswrapper[4756]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 24 00:06:22 crc kubenswrapper[4756]: --webhook-host=127.0.0.1 \ Feb 24 00:06:22 crc kubenswrapper[4756]: --webhook-port=9743 \ Feb 24 00:06:22 crc kubenswrapper[4756]: ${ho_enable} \ Feb 24 00:06:22 crc kubenswrapper[4756]: --enable-interconnect \ Feb 24 00:06:22 crc kubenswrapper[4756]: --disable-approver \ Feb 24 00:06:22 crc kubenswrapper[4756]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 24 00:06:22 crc kubenswrapper[4756]: --wait-for-kubernetes-api=200s \ Feb 24 00:06:22 crc kubenswrapper[4756]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 24 00:06:22 crc kubenswrapper[4756]: --loglevel="${LOGLEVEL}" Feb 24 00:06:22 crc kubenswrapper[4756]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 24 00:06:22 crc kubenswrapper[4756]: > logger="UnhandledError" Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.041662 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.041941 4756 scope.go:117] "RemoveContainer" containerID="7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c" Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.042145 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.043127 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.044108 4756 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 24 00:06:22 crc kubenswrapper[4756]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 24 00:06:22 crc kubenswrapper[4756]: if [[ -f "/env/_master" ]]; then Feb 24 00:06:22 crc kubenswrapper[4756]: set -o allexport Feb 24 00:06:22 crc kubenswrapper[4756]: source "/env/_master" Feb 24 00:06:22 crc kubenswrapper[4756]: set +o allexport Feb 24 00:06:22 crc kubenswrapper[4756]: fi Feb 24 00:06:22 crc kubenswrapper[4756]: Feb 24 00:06:22 crc kubenswrapper[4756]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 24 00:06:22 crc kubenswrapper[4756]: exec 
/usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 24 00:06:22 crc kubenswrapper[4756]: --disable-webhook \ Feb 24 00:06:22 crc kubenswrapper[4756]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 24 00:06:22 crc kubenswrapper[4756]: --loglevel="${LOGLEVEL}" Feb 24 00:06:22 crc kubenswrapper[4756]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least 
once, cannot construct envvars Feb 24 00:06:22 crc kubenswrapper[4756]: > logger="UnhandledError" Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.045260 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.053273 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.063395 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.072228 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.087514 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.098276 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.112559 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.121879 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.135103 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.145671 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.156091 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.168361 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.177751 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.189158 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.200389 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.245680 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.258260 4756 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.485220 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.485343 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.485490 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.485565 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:24.485544743 +0000 UTC m=+41.396407386 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.485642 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:06:24.485632025 +0000 UTC m=+41.396494668 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.585984 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.586103 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.586160 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.586347 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.586371 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.586387 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.586453 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:24.586433238 +0000 UTC m=+41.497295881 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.586517 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.586528 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.586537 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.586572 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:24.586559031 +0000 UTC m=+41.497421674 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.586641 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.586683 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:24.586671475 +0000 UTC m=+41.497534118 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.795765 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:32:06.629343681 +0000 UTC Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.832477 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.832637 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.832719 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.832777 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.832821 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:22 crc kubenswrapper[4756]: E0224 00:06:22.832874 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.984397 4756 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:06:22 crc kubenswrapper[4756]: I0224 00:06:22.984485 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:06:23 crc kubenswrapper[4756]: I0224 00:06:23.025266 4756 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 24 00:06:23 crc kubenswrapper[4756]: I0224 00:06:23.795990 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 07:57:23.058072311 +0000 UTC Feb 24 00:06:23 crc kubenswrapper[4756]: I0224 00:06:23.855333 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:23 crc kubenswrapper[4756]: I0224 00:06:23.874945 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:23 crc kubenswrapper[4756]: I0224 00:06:23.890805 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:23 crc kubenswrapper[4756]: I0224 00:06:23.906422 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:23 crc kubenswrapper[4756]: I0224 00:06:23.926176 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:23 crc kubenswrapper[4756]: I0224 00:06:23.948296 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:23 crc kubenswrapper[4756]: I0224 00:06:23.965837 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:24 crc kubenswrapper[4756]: I0224 00:06:24.516226 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.516420 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:06:28.516382155 +0000 UTC m=+45.427244818 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:06:24 crc kubenswrapper[4756]: I0224 00:06:24.516495 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.516682 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.516775 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:28.516751716 +0000 UTC m=+45.427614389 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:24 crc kubenswrapper[4756]: I0224 00:06:24.617045 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:24 crc kubenswrapper[4756]: I0224 00:06:24.617167 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:24 crc kubenswrapper[4756]: I0224 00:06:24.617191 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.617303 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.617355 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:28.617341683 +0000 UTC m=+45.528204316 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.617398 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.619265 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.617880 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.619438 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.619529 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.619764 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:28.619729912 +0000 UTC m=+45.530592545 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.619915 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.620081 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:28.620044211 +0000 UTC m=+45.530906844 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:24 crc kubenswrapper[4756]: I0224 00:06:24.796890 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:07:09.338902801 +0000 UTC Feb 24 00:06:24 crc kubenswrapper[4756]: I0224 00:06:24.832929 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:24 crc kubenswrapper[4756]: I0224 00:06:24.833126 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.833182 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.833407 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:24 crc kubenswrapper[4756]: I0224 00:06:24.833539 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:24 crc kubenswrapper[4756]: E0224 00:06:24.833769 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.181403 4756 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.183638 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.183688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.183704 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.183826 4756 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.195479 4756 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.195678 4756 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.197870 4756 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.197927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.197946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.197973 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.197991 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:25 crc kubenswrapper[4756]: E0224 00:06:25.221726 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.227374 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.227438 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.227453 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.227483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.227502 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:25 crc kubenswrapper[4756]: E0224 00:06:25.242746 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.247516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.247563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.247580 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.247604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.247622 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:25 crc kubenswrapper[4756]: E0224 00:06:25.263888 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.270103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.270162 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.270178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.270199 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.270215 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:25 crc kubenswrapper[4756]: E0224 00:06:25.286287 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.292470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.292513 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.292532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.292558 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.292578 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:25 crc kubenswrapper[4756]: E0224 00:06:25.310929 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:25 crc kubenswrapper[4756]: E0224 00:06:25.311196 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.314247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.314301 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.314314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.314334 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.314348 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.418239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.418300 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.418316 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.418337 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.418349 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.522376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.522474 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.522502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.522549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.522603 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.625613 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.625684 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.625702 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.625729 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.625746 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.632251 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.633299 4756 scope.go:117] "RemoveContainer" containerID="7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c" Feb 24 00:06:25 crc kubenswrapper[4756]: E0224 00:06:25.633628 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.728821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.728893 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.728912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.728941 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.728959 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.797635 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 22:40:50.054605296 +0000 UTC Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.831376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.831431 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.831447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.831472 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.831487 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.934857 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.934919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.934965 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.934996 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:25 crc kubenswrapper[4756]: I0224 00:06:25.935018 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:25Z","lastTransitionTime":"2026-02-24T00:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.038457 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.038547 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.038567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.038596 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.038620 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:26Z","lastTransitionTime":"2026-02-24T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.142227 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.142297 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.142315 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.142343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.142361 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:26Z","lastTransitionTime":"2026-02-24T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.245388 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.245454 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.245476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.245504 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.245526 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:26Z","lastTransitionTime":"2026-02-24T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.349261 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.349340 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.349366 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.349391 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.349412 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:26Z","lastTransitionTime":"2026-02-24T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.456767 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.456834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.456853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.456879 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.456897 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:26Z","lastTransitionTime":"2026-02-24T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.560733 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.560803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.560826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.560852 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.560872 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:26Z","lastTransitionTime":"2026-02-24T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.663853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.663923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.663946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.663974 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.663999 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:26Z","lastTransitionTime":"2026-02-24T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.767920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.767997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.768016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.768039 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.768055 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:26Z","lastTransitionTime":"2026-02-24T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.798516 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 09:30:23.885219387 +0000 UTC Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.833008 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.833008 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:26 crc kubenswrapper[4756]: E0224 00:06:26.833284 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.833016 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:26 crc kubenswrapper[4756]: E0224 00:06:26.833365 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:26 crc kubenswrapper[4756]: E0224 00:06:26.833475 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.870953 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.871017 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.872016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.872058 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.872086 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:26Z","lastTransitionTime":"2026-02-24T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.975590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.975673 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.975698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.975731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:26 crc kubenswrapper[4756]: I0224 00:06:26.975754 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:26Z","lastTransitionTime":"2026-02-24T00:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.077999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.078095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.078113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.078135 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.078151 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:27Z","lastTransitionTime":"2026-02-24T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.181216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.181283 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.181305 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.181347 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.181365 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:27Z","lastTransitionTime":"2026-02-24T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.284600 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.284642 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.284652 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.284669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.284683 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:27Z","lastTransitionTime":"2026-02-24T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.387121 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.387185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.387210 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.387232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.387249 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:27Z","lastTransitionTime":"2026-02-24T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.490760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.490824 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.490835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.490854 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.490864 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:27Z","lastTransitionTime":"2026-02-24T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.595155 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.595218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.595232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.595255 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.595270 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:27Z","lastTransitionTime":"2026-02-24T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.698420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.698465 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.698476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.698495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.698720 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:27Z","lastTransitionTime":"2026-02-24T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.799298 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:15:54.351405536 +0000 UTC Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.801638 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.801672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.801682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.801700 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.801713 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:27Z","lastTransitionTime":"2026-02-24T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.904906 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.904957 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.904971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.904989 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:27 crc kubenswrapper[4756]: I0224 00:06:27.905000 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:27Z","lastTransitionTime":"2026-02-24T00:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.008324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.008382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.008394 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.008418 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.008433 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:28Z","lastTransitionTime":"2026-02-24T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.111434 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.111473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.111485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.111503 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.111514 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:28Z","lastTransitionTime":"2026-02-24T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.214780 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.214869 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.214890 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.214919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.214939 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:28Z","lastTransitionTime":"2026-02-24T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.319859 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.319919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.319932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.319954 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.319967 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:28Z","lastTransitionTime":"2026-02-24T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.422857 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.422908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.422922 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.422946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.422966 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:28Z","lastTransitionTime":"2026-02-24T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.527201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.527270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.527291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.527322 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.527343 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:28Z","lastTransitionTime":"2026-02-24T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.557225 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.557403 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.557459 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:06:36.55742262 +0000 UTC m=+53.468285293 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.557515 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.557569 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:36.557554574 +0000 UTC m=+53.468417227 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.631324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.631382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.631394 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.631415 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.631428 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:28Z","lastTransitionTime":"2026-02-24T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.658507 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.658569 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.658592 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.658729 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.658794 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.658826 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:36.658806569 +0000 UTC m=+53.569669202 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.658835 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.658856 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.658795 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.658934 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:36.658905852 +0000 UTC m=+53.569768675 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.658943 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.658975 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.659099 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:36.659050256 +0000 UTC m=+53.569913089 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.734580 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.734660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.734680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.734706 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.734729 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:28Z","lastTransitionTime":"2026-02-24T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.799468 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 16:15:11.357426034 +0000 UTC Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.833356 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.833384 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.833635 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.833380 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.833790 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:28 crc kubenswrapper[4756]: E0224 00:06:28.833966 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.837785 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.837871 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.837897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.837932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.837959 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:28Z","lastTransitionTime":"2026-02-24T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.941800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.941931 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.941946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.941970 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:28 crc kubenswrapper[4756]: I0224 00:06:28.942008 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:28Z","lastTransitionTime":"2026-02-24T00:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.046934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.046979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.046992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.047015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.047027 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:29Z","lastTransitionTime":"2026-02-24T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.151164 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.151239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.151257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.151284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.151304 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:29Z","lastTransitionTime":"2026-02-24T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.254440 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.254520 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.254547 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.254582 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.254604 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:29Z","lastTransitionTime":"2026-02-24T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.358222 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.358296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.358313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.358335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.358352 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:29Z","lastTransitionTime":"2026-02-24T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.460905 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.460988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.461010 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.461042 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.461130 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:29Z","lastTransitionTime":"2026-02-24T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.564715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.564766 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.564777 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.564797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.564810 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:29Z","lastTransitionTime":"2026-02-24T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.667620 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.667685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.667696 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.667717 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.667731 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:29Z","lastTransitionTime":"2026-02-24T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.771605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.771664 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.771677 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.771701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.771715 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:29Z","lastTransitionTime":"2026-02-24T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.800310 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 17:13:33.025123203 +0000 UTC Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.875143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.875192 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.875208 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.875230 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.875240 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:29Z","lastTransitionTime":"2026-02-24T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.977988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.978040 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.978052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.978088 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:29 crc kubenswrapper[4756]: I0224 00:06:29.978104 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:29Z","lastTransitionTime":"2026-02-24T00:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.080788 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.080843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.080855 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.080873 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.080888 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:30Z","lastTransitionTime":"2026-02-24T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.183285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.183339 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.183352 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.183375 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.183387 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:30Z","lastTransitionTime":"2026-02-24T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.285966 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.286021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.286034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.286053 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.286089 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:30Z","lastTransitionTime":"2026-02-24T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.359274 4756 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.388456 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.388883 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.389108 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.389206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.389533 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:30Z","lastTransitionTime":"2026-02-24T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.492319 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.492639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.492728 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.492816 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.492903 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:30Z","lastTransitionTime":"2026-02-24T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.596007 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.596079 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.596093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.596119 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.596133 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:30Z","lastTransitionTime":"2026-02-24T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.699223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.699263 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.699273 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.699289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.699300 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:30Z","lastTransitionTime":"2026-02-24T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.800586 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 15:17:39.602331184 +0000 UTC Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.802667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.802767 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.802797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.802829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.802851 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:30Z","lastTransitionTime":"2026-02-24T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.833003 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.833057 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:30 crc kubenswrapper[4756]: E0224 00:06:30.833324 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:30 crc kubenswrapper[4756]: E0224 00:06:30.833385 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.833131 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:30 crc kubenswrapper[4756]: E0224 00:06:30.833471 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.904888 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.904925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.904935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.904951 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:30 crc kubenswrapper[4756]: I0224 00:06:30.904961 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:30Z","lastTransitionTime":"2026-02-24T00:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.007723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.007807 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.007827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.007855 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.007875 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:31Z","lastTransitionTime":"2026-02-24T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.110310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.110737 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.110839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.111014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.111096 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:31Z","lastTransitionTime":"2026-02-24T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.214217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.214271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.214280 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.214299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.214311 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:31Z","lastTransitionTime":"2026-02-24T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.317095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.317149 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.317210 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.317237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.317262 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:31Z","lastTransitionTime":"2026-02-24T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.420020 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.420100 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.420113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.420142 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.420158 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:31Z","lastTransitionTime":"2026-02-24T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.522242 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.522289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.522306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.522328 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.522341 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:31Z","lastTransitionTime":"2026-02-24T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.624935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.625259 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.625405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.625505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.625595 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:31Z","lastTransitionTime":"2026-02-24T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.729123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.729214 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.729238 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.729272 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.729295 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:31Z","lastTransitionTime":"2026-02-24T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.800970 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:41:43.553856194 +0000 UTC Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.832040 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.832124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.832142 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.832172 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.832189 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:31Z","lastTransitionTime":"2026-02-24T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.935332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.935399 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.935414 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.935437 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:31 crc kubenswrapper[4756]: I0224 00:06:31.935456 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:31Z","lastTransitionTime":"2026-02-24T00:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.039027 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.039146 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.039162 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.039183 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.039198 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:32Z","lastTransitionTime":"2026-02-24T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.141839 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.141900 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.141915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.141940 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.141957 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:32Z","lastTransitionTime":"2026-02-24T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.245451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.245554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.245581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.245620 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.245645 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:32Z","lastTransitionTime":"2026-02-24T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.347348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.347401 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.347418 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.347443 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.347462 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:32Z","lastTransitionTime":"2026-02-24T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.450515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.450593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.450612 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.450648 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.450670 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:32Z","lastTransitionTime":"2026-02-24T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.554469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.554534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.554558 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.554589 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.554612 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:32Z","lastTransitionTime":"2026-02-24T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.657698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.657749 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.657766 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.657791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.657809 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:32Z","lastTransitionTime":"2026-02-24T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.761461 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.761504 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.761515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.761535 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.761547 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:32Z","lastTransitionTime":"2026-02-24T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.801829 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:48:40.464356328 +0000 UTC Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.832368 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.832411 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.832370 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:32 crc kubenswrapper[4756]: E0224 00:06:32.832617 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:32 crc kubenswrapper[4756]: E0224 00:06:32.832545 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:32 crc kubenswrapper[4756]: E0224 00:06:32.832809 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.864549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.864860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.864881 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.864948 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.864968 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:32Z","lastTransitionTime":"2026-02-24T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.969291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.969581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.969676 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.969786 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.969880 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:32Z","lastTransitionTime":"2026-02-24T00:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.984020 4756 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:06:32 crc kubenswrapper[4756]: I0224 00:06:32.984221 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.078872 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.078945 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.078965 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.078993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.079014 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:33Z","lastTransitionTime":"2026-02-24T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.183750 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.183816 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.183834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.183863 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.183888 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:33Z","lastTransitionTime":"2026-02-24T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.287156 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.287211 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.287222 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.287243 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.287255 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:33Z","lastTransitionTime":"2026-02-24T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.390413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.390467 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.390477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.390495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.390507 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:33Z","lastTransitionTime":"2026-02-24T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.494284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.494380 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.494408 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.494442 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.494467 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:33Z","lastTransitionTime":"2026-02-24T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.597886 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.597970 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.597992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.598025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.598049 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:33Z","lastTransitionTime":"2026-02-24T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.700563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.700650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.700677 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.700709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.700733 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:33Z","lastTransitionTime":"2026-02-24T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.801954 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 04:34:21.225276564 +0000 UTC Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.804099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.804325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.804498 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.804701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.804883 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:33Z","lastTransitionTime":"2026-02-24T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.847546 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.865543 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.882598 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.902637 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.908324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.908397 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.908422 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 
00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.908455 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.908475 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:33Z","lastTransitionTime":"2026-02-24T00:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.920394 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.938517 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:33 crc kubenswrapper[4756]: I0224 00:06:33.956135 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.011731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.011808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.011832 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.011864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.011887 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:34Z","lastTransitionTime":"2026-02-24T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.114954 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.115059 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.115124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.115156 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.115180 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:34Z","lastTransitionTime":"2026-02-24T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.218150 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.218215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.218231 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.218259 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.218277 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:34Z","lastTransitionTime":"2026-02-24T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.321427 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.321482 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.321499 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.321538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.321552 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:34Z","lastTransitionTime":"2026-02-24T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.425190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.425242 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.425251 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.425275 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.425291 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:34Z","lastTransitionTime":"2026-02-24T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.528973 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.529537 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.529693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.529877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.530007 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:34Z","lastTransitionTime":"2026-02-24T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.633631 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.633698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.633718 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.633748 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.633767 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:34Z","lastTransitionTime":"2026-02-24T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.661633 4756 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.736304 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.736373 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.736391 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.736417 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.736435 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:34Z","lastTransitionTime":"2026-02-24T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.803373 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 18:43:07.178881658 +0000 UTC Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.832922 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:34 crc kubenswrapper[4756]: E0224 00:06:34.833126 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.833212 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.833299 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:34 crc kubenswrapper[4756]: E0224 00:06:34.833463 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:34 crc kubenswrapper[4756]: E0224 00:06:34.833710 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.839117 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.839304 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.839414 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.839508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.839598 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:34Z","lastTransitionTime":"2026-02-24T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.942789 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.942836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.942850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.942870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:34 crc kubenswrapper[4756]: I0224 00:06:34.942883 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:34Z","lastTransitionTime":"2026-02-24T00:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.045930 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.046038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.046104 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.046142 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.046163 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.149156 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.149218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.149236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.149259 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.149273 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.252535 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.252905 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.253026 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.253194 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.253338 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.328479 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.328552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.328581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.328618 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.328640 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: E0224 00:06:35.345405 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.350873 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.350918 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.350932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.350988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.351007 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: E0224 00:06:35.361712 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.366558 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.366619 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.366647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.366681 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.366704 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: E0224 00:06:35.378718 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.383155 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.383213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.383235 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.383265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.383285 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: E0224 00:06:35.398607 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.403804 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.403871 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.403890 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.403917 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.403934 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: E0224 00:06:35.415021 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:35 crc kubenswrapper[4756]: E0224 00:06:35.415314 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.417486 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.417529 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.417547 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.417578 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.417599 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.521577 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.521672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.521698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.521729 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.521751 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.651244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.651321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.651335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.651361 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.651377 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.754920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.754977 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.754988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.755009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.755027 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.804546 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 03:46:38.259468112 +0000 UTC Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.858139 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.858207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.858225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.858252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.858272 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.960808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.961250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.961264 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.961285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:35 crc kubenswrapper[4756]: I0224 00:06:35.961299 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:35Z","lastTransitionTime":"2026-02-24T00:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.064444 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.064485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.064494 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.064514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.064530 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:36Z","lastTransitionTime":"2026-02-24T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.093692 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.093788 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.107884 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.118115 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.129900 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.143773 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.154891 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.167973 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.168274 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 
00:06:36.168377 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.168118 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.168499 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.168759 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:36Z","lastTransitionTime":"2026-02-24T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.179186 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.272375 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.272417 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.272431 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.272454 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.272469 4756 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:36Z","lastTransitionTime":"2026-02-24T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.375396 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.375757 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.375916 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.376022 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.376147 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:36Z","lastTransitionTime":"2026-02-24T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.479301 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.479348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.479359 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.479380 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.479396 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:36Z","lastTransitionTime":"2026-02-24T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.582158 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.582198 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.582208 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.582223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.582235 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:36Z","lastTransitionTime":"2026-02-24T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.657591 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.657695 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.657764 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:06:52.657739028 +0000 UTC m=+69.568601661 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.657808 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.657862 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:52.657851362 +0000 UTC m=+69.568713995 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.684747 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.684782 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.684791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.684810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.684822 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:36Z","lastTransitionTime":"2026-02-24T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.758102 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.758162 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.758195 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.758370 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.758392 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.758406 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.758463 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:52.758446878 +0000 UTC m=+69.669309521 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.758527 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.758659 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:52.758632153 +0000 UTC m=+69.669494796 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.758546 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.758702 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.758718 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.758754 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:06:52.758744607 +0000 UTC m=+69.669607250 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.787270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.787310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.787323 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.787341 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.787354 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:36Z","lastTransitionTime":"2026-02-24T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.805657 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:34:10.239326185 +0000 UTC Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.833222 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.833414 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.833594 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.833763 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.833460 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:36 crc kubenswrapper[4756]: E0224 00:06:36.834137 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.889790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.889844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.889855 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.889874 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.889888 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:36Z","lastTransitionTime":"2026-02-24T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.992576 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.992623 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.992637 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.992665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:36 crc kubenswrapper[4756]: I0224 00:06:36.992684 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:36Z","lastTransitionTime":"2026-02-24T00:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.097529 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.097607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.097637 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.097667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.097694 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:37Z","lastTransitionTime":"2026-02-24T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.200738 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.200802 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.200820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.200843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.200860 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:37Z","lastTransitionTime":"2026-02-24T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.303489 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.303529 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.303541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.303558 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.303570 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:37Z","lastTransitionTime":"2026-02-24T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.407165 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.407232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.407254 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.407285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.407303 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:37Z","lastTransitionTime":"2026-02-24T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.511198 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.511260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.511281 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.511310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.511331 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:37Z","lastTransitionTime":"2026-02-24T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.614315 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.614373 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.614390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.614414 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.614433 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:37Z","lastTransitionTime":"2026-02-24T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.717967 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.718034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.718050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.718100 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.718115 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:37Z","lastTransitionTime":"2026-02-24T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.806128 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:53:44.725776458 +0000 UTC Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.821013 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.821093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.821111 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.821136 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.821149 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:37Z","lastTransitionTime":"2026-02-24T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.924748 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.924798 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.924810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.924830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:37 crc kubenswrapper[4756]: I0224 00:06:37.924844 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:37Z","lastTransitionTime":"2026-02-24T00:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.030650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.030762 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.030777 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.030803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.030819 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:38Z","lastTransitionTime":"2026-02-24T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.105311 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee"} Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.128394 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.133515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.133562 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.133576 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.133598 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.133611 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:38Z","lastTransitionTime":"2026-02-24T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.146528 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.194839 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.210964 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.228180 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.236154 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.236210 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.236224 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.236247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.236268 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:38Z","lastTransitionTime":"2026-02-24T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.243985 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.257369 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:38Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.338736 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.338783 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.338792 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.338808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.338820 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:38Z","lastTransitionTime":"2026-02-24T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.442052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.442138 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.442159 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.442184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.442203 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:38Z","lastTransitionTime":"2026-02-24T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.545834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.545900 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.545915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.545940 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.545956 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:38Z","lastTransitionTime":"2026-02-24T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.649431 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.649478 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.649489 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.649512 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.649526 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:38Z","lastTransitionTime":"2026-02-24T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.752776 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.752829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.752838 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.752857 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.752868 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:38Z","lastTransitionTime":"2026-02-24T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.806533 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:02:56.611789873 +0000 UTC Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.832329 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:38 crc kubenswrapper[4756]: E0224 00:06:38.832579 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.832823 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:38 crc kubenswrapper[4756]: E0224 00:06:38.832992 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.832743 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:38 crc kubenswrapper[4756]: E0224 00:06:38.833335 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.855058 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.855148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.855184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.855218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.855278 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:38Z","lastTransitionTime":"2026-02-24T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.958236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.958306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.958320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.958341 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:38 crc kubenswrapper[4756]: I0224 00:06:38.958356 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:38Z","lastTransitionTime":"2026-02-24T00:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.061995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.062046 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.062056 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.062086 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.062101 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.112162 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871"} Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.135529 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.155974 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.165569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.165622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.165633 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.165656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.165670 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.176393 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.195411 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.214439 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.229794 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd2
94f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.246596 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:39Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.269457 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.269526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.269543 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.269565 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.269582 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.373387 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.373445 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.373461 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.373495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.373516 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.477604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.477663 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.477676 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.477696 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.477779 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.581197 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.581254 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.581267 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.581286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.581300 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.684784 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.684830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.684842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.684860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.684872 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.787532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.787580 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.787588 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.787603 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.787612 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.807338 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:14:27.493253717 +0000 UTC Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.833595 4756 scope.go:117] "RemoveContainer" containerID="7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.891003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.891061 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.891081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.891480 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.891533 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.989682 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.994943 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.996006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.996106 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.996130 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.996165 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:39 crc kubenswrapper[4756]: I0224 00:06:39.996188 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:39Z","lastTransitionTime":"2026-02-24T00:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.003335 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.007400 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.024151 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.043725 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.063691 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.079548 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.094398 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd2
94f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.099821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.100113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.100247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.100397 4756 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.100491 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:40Z","lastTransitionTime":"2026-02-24T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.109448 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.116594 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.117954 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185"} Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.118555 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.124143 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.141494 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.153382 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.166293 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.188600 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.203092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.203237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.203313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.203392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.203464 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:40Z","lastTransitionTime":"2026-02-24T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.210322 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.251444 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.272425 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.291529 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.306584 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.306644 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:40 crc 
kubenswrapper[4756]: I0224 00:06:40.306656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.306677 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.306693 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:40Z","lastTransitionTime":"2026-02-24T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.309482 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.325120 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.336997 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.359776 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.373594 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 
2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.388654 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.404115 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.408992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.409027 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.409037 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.409053 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.409085 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:40Z","lastTransitionTime":"2026-02-24T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.511512 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.511854 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.512252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.512328 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.512398 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:40Z","lastTransitionTime":"2026-02-24T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.615325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.615373 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.615390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.615428 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.615450 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:40Z","lastTransitionTime":"2026-02-24T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.663962 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.673654 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.682327 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.697981 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.713889 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.717947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.718115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.718185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.718257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.718329 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:40Z","lastTransitionTime":"2026-02-24T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.729780 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.746566 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.768490 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.791288 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.808196 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:46:32.799200297 +0000 UTC Feb 24 00:06:40 crc 
kubenswrapper[4756]: I0224 00:06:40.810767 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:40Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.820696 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.820763 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.820778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.820799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.820812 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:40Z","lastTransitionTime":"2026-02-24T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.832309 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.832310 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.832466 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:40 crc kubenswrapper[4756]: E0224 00:06:40.832574 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:40 crc kubenswrapper[4756]: E0224 00:06:40.832651 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:40 crc kubenswrapper[4756]: E0224 00:06:40.832751 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.924356 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.924425 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.924443 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.924469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:40 crc kubenswrapper[4756]: I0224 00:06:40.924487 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:40Z","lastTransitionTime":"2026-02-24T00:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.027230 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.027278 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.027291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.027310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.027321 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:41Z","lastTransitionTime":"2026-02-24T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.130540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.130669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.130689 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.130718 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.130741 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:41Z","lastTransitionTime":"2026-02-24T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.233037 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.233110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.233123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.233140 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.233152 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:41Z","lastTransitionTime":"2026-02-24T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.336518 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.336575 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.336593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.336615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.336633 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:41Z","lastTransitionTime":"2026-02-24T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.440129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.440182 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.440197 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.440218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.440231 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:41Z","lastTransitionTime":"2026-02-24T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.543489 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.543567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.543586 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.543613 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.543631 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:41Z","lastTransitionTime":"2026-02-24T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.647003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.647063 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.647093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.647117 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.647135 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:41Z","lastTransitionTime":"2026-02-24T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.749867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.749916 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.749926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.749944 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.749956 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:41Z","lastTransitionTime":"2026-02-24T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.809286 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:26:11.611795025 +0000 UTC Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.853731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.853787 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.853803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.853822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.853837 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:41Z","lastTransitionTime":"2026-02-24T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.956107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.956147 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.956158 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.956175 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:41 crc kubenswrapper[4756]: I0224 00:06:41.956186 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:41Z","lastTransitionTime":"2026-02-24T00:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.058488 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.058550 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.058573 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.058593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.058608 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:42Z","lastTransitionTime":"2026-02-24T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.161363 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.161413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.161454 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.161472 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.161485 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:42Z","lastTransitionTime":"2026-02-24T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.264854 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.264910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.264922 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.264942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.264957 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:42Z","lastTransitionTime":"2026-02-24T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.368396 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.368473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.368491 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.368519 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.368541 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:42Z","lastTransitionTime":"2026-02-24T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.472252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.472334 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.472353 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.472379 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.472397 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:42Z","lastTransitionTime":"2026-02-24T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.575372 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.575443 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.575467 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.575497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.575518 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:42Z","lastTransitionTime":"2026-02-24T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.678627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.678703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.678727 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.678759 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.678784 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:42Z","lastTransitionTime":"2026-02-24T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.782618 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.782790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.782830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.782862 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.782884 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:42Z","lastTransitionTime":"2026-02-24T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.810162 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 22:24:00.62468589 +0000 UTC Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.833128 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.833151 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:42 crc kubenswrapper[4756]: E0224 00:06:42.833341 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.833279 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:42 crc kubenswrapper[4756]: E0224 00:06:42.833465 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:42 crc kubenswrapper[4756]: E0224 00:06:42.833675 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.885896 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.885945 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.885955 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.885974 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.885986 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:42Z","lastTransitionTime":"2026-02-24T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.988324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.988385 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.988402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.988432 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:42 crc kubenswrapper[4756]: I0224 00:06:42.988450 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:42Z","lastTransitionTime":"2026-02-24T00:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.092293 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.092360 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.092371 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.092392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.092406 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:43Z","lastTransitionTime":"2026-02-24T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.194835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.194910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.194928 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.194956 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.194974 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:43Z","lastTransitionTime":"2026-02-24T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.298139 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.298208 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.298225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.298252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.298269 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:43Z","lastTransitionTime":"2026-02-24T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.401631 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.401695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.401711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.401729 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.401741 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:43Z","lastTransitionTime":"2026-02-24T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.505497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.505558 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.505572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.505593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.505606 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:43Z","lastTransitionTime":"2026-02-24T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.611522 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.611829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.611849 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.611879 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.611892 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:43Z","lastTransitionTime":"2026-02-24T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.669737 4756 csr.go:261] certificate signing request csr-z5s2r is approved, waiting to be issued Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.714288 4756 csr.go:257] certificate signing request csr-z5s2r is issued Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.715057 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.715135 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.715149 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.715174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.715187 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:43Z","lastTransitionTime":"2026-02-24T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.810879 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 16:12:11.848382378 +0000 UTC Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.818127 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.818199 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.818210 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.818229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.818240 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:43Z","lastTransitionTime":"2026-02-24T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.872572 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.901198 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.914385 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.920264 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.920515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.920638 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.920717 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.920776 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:43Z","lastTransitionTime":"2026-02-24T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.928507 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.948999 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.966532 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.982450 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771a
ee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\
\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:43 crc kubenswrapper[4756]: I0224 00:06:43.997500 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.013183 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.023000 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.023039 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.023050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.023089 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.023104 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:44Z","lastTransitionTime":"2026-02-24T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.126835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.126932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.126961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.126987 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.127001 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:44Z","lastTransitionTime":"2026-02-24T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.230682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.230743 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.230760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.230781 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.230795 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:44Z","lastTransitionTime":"2026-02-24T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.333818 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.334411 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.334426 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.334451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.334465 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:44Z","lastTransitionTime":"2026-02-24T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.436955 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.436995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.437005 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.437019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.437028 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:44Z","lastTransitionTime":"2026-02-24T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.539710 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.540064 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.540180 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.540269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.540356 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:44Z","lastTransitionTime":"2026-02-24T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.643630 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.643674 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.643684 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.643706 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.643719 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:44Z","lastTransitionTime":"2026-02-24T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.716169 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 00:01:43 +0000 UTC, rotation deadline is 2026-11-17 23:05:10.549129514 +0000 UTC
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.716477 4756 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6406h58m25.832656927s for next certificate rotation
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.746605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.746934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.747008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.747091 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.747188 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:44Z","lastTransitionTime":"2026-02-24T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.811727 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 11:02:09.648310421 +0000 UTC
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.832182 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.832246 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.832269 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 00:06:44 crc kubenswrapper[4756]: E0224 00:06:44.832343 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 00:06:44 crc kubenswrapper[4756]: E0224 00:06:44.832507 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 00:06:44 crc kubenswrapper[4756]: E0224 00:06:44.832741 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.850694 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.850734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.850747 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.850767 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.850781 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:44Z","lastTransitionTime":"2026-02-24T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.953348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.953420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.953438 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.953469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:44 crc kubenswrapper[4756]: I0224 00:06:44.953488 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:44Z","lastTransitionTime":"2026-02-24T00:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.056293 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.056361 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.056376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.056397 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.056411 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.158850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.158897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.158908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.158926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.158937 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.261902 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.261945 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.261955 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.261970 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.261981 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.364962 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.365032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.365115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.365146 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.365167 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.427145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.427549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.427682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.427850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.428016 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:45 crc kubenswrapper[4756]: E0224 00:06:45.441792 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.446115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.446343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.446446 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.446581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.447272 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:45 crc kubenswrapper[4756]: E0224 00:06:45.461781 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.465635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.465834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.465943 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.466032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.466198 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:45 crc kubenswrapper[4756]: E0224 00:06:45.479129 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:45Z is after 2025-08-24T17:21:41Z"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.483316 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.483447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.483509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.483590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.483655 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:45 crc kubenswrapper[4756]: E0224 00:06:45.496177 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:45Z is after 2025-08-24T17:21:41Z"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.499930 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.500045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.500162 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.500254 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.500338 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:45 crc kubenswrapper[4756]: E0224 00:06:45.514001 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:45Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:45 crc kubenswrapper[4756]: E0224 00:06:45.514351 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.516345 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.516400 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.516413 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.516435 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.516448 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.620541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.620606 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.620622 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.620647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.620661 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.723561 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.723632 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.723646 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.723667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.723684 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.812487 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 05:07:48.079223384 +0000 UTC Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.826904 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.826958 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.826973 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.826999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.827014 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.929662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.929715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.929728 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.929751 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:45 crc kubenswrapper[4756]: I0224 00:06:45.929769 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:45Z","lastTransitionTime":"2026-02-24T00:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.033447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.033486 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.033497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.033514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.033526 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:46Z","lastTransitionTime":"2026-02-24T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.135869 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.135941 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.135953 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.135973 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.135987 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:46Z","lastTransitionTime":"2026-02-24T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.238350 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.238408 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.238423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.238445 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.238459 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:46Z","lastTransitionTime":"2026-02-24T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.341275 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.341326 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.341344 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.341368 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.341388 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:46Z","lastTransitionTime":"2026-02-24T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.444803 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.444853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.444867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.444890 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.444905 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:46Z","lastTransitionTime":"2026-02-24T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.547889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.547944 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.547962 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.547984 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.548000 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:46Z","lastTransitionTime":"2026-02-24T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.651858 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.651955 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.651985 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.652018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.652040 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:46Z","lastTransitionTime":"2026-02-24T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.754478 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.754517 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.754529 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.754546 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.754557 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:46Z","lastTransitionTime":"2026-02-24T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.813165 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 21:29:24.853329151 +0000 UTC Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.832553 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.832676 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:46 crc kubenswrapper[4756]: E0224 00:06:46.832801 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.833253 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:46 crc kubenswrapper[4756]: E0224 00:06:46.833332 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:46 crc kubenswrapper[4756]: E0224 00:06:46.833397 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.857014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.857089 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.857107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.857129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.857145 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:46Z","lastTransitionTime":"2026-02-24T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.961214 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.961269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.961282 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.961302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:46 crc kubenswrapper[4756]: I0224 00:06:46.961317 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:46Z","lastTransitionTime":"2026-02-24T00:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.063547 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.063595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.063607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.063626 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.063662 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:47Z","lastTransitionTime":"2026-02-24T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.165825 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.165878 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.165895 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.165915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.165930 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:47Z","lastTransitionTime":"2026-02-24T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.268597 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.268642 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.268654 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.268671 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.268709 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:47Z","lastTransitionTime":"2026-02-24T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.372116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.372161 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.372173 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.372192 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.372204 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:47Z","lastTransitionTime":"2026-02-24T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.474711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.474760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.474771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.474790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.474812 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:47Z","lastTransitionTime":"2026-02-24T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.577211 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.577260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.577277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.577298 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.577310 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:47Z","lastTransitionTime":"2026-02-24T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.680546 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.680595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.680606 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.680626 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.680638 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:47Z","lastTransitionTime":"2026-02-24T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.784169 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.784222 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.784233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.784251 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.784602 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:47Z","lastTransitionTime":"2026-02-24T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.813882 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 15:29:15.331007904 +0000 UTC Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.888038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.888099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.888109 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.888128 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.888140 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:47Z","lastTransitionTime":"2026-02-24T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.991372 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.991434 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.991449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.991469 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:47 crc kubenswrapper[4756]: I0224 00:06:47.991483 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:47Z","lastTransitionTime":"2026-02-24T00:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.094814 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.094878 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.094895 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.094939 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.094953 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:48Z","lastTransitionTime":"2026-02-24T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.197923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.197982 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.198008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.198040 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.198056 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:48Z","lastTransitionTime":"2026-02-24T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.300540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.300589 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.300604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.300620 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.300630 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:48Z","lastTransitionTime":"2026-02-24T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.402821 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.402891 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.402907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.402932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.402948 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:48Z","lastTransitionTime":"2026-02-24T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.506205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.506250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.506260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.506279 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.506291 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:48Z","lastTransitionTime":"2026-02-24T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.609225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.609259 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.609269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.609285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.609295 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:48Z","lastTransitionTime":"2026-02-24T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.711998 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.712151 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.712180 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.712217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.712242 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:48Z","lastTransitionTime":"2026-02-24T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.814371 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:11:09.758024624 +0000 UTC Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.816037 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.816095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.816105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.816124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.816135 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:48Z","lastTransitionTime":"2026-02-24T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.832785 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.832929 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:48 crc kubenswrapper[4756]: E0224 00:06:48.832980 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:48 crc kubenswrapper[4756]: E0224 00:06:48.833206 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.833323 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:48 crc kubenswrapper[4756]: E0224 00:06:48.833414 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.918994 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.919144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.919162 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.919180 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:48 crc kubenswrapper[4756]: I0224 00:06:48.919191 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:48Z","lastTransitionTime":"2026-02-24T00:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.022391 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.022451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.022468 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.022494 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.022514 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.125381 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.125466 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.125493 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.125524 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.125549 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.228168 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.228230 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.228244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.228268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.228284 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.331018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.331090 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.331102 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.331116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.331129 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.433281 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.433322 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.433332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.433349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.433361 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.536140 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.536177 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.536186 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.536204 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.536215 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.638837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.638909 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.638927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.638953 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.638969 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.746712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.746786 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.746804 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.746829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.746844 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.815597 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:36:53.31967506 +0000 UTC Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.849458 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.849508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.849522 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.849548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.849564 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.953549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.953612 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.953632 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.953656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:49 crc kubenswrapper[4756]: I0224 00:06:49.953669 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:49Z","lastTransitionTime":"2026-02-24T00:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.057599 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.057680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.057695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.057716 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.057729 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.160593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.160656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.160668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.160707 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.160720 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.264101 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.264183 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.264199 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.264223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.264239 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.367482 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.367536 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.367554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.367585 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.367603 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.471002 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.471042 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.471052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.471084 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.471094 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.573844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.573909 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.573931 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.573956 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.573981 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.677615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.677667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.677679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.677701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.677715 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.780113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.780161 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.780170 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.780190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.780201 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.816760 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:24:46.055064372 +0000 UTC Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.832241 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.832269 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.832359 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:50 crc kubenswrapper[4756]: E0224 00:06:50.832426 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:50 crc kubenswrapper[4756]: E0224 00:06:50.832496 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:50 crc kubenswrapper[4756]: E0224 00:06:50.832616 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.883862 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.883919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.883932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.883954 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.883967 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.987179 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.987234 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.987250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.987272 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:50 crc kubenswrapper[4756]: I0224 00:06:50.987286 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:50Z","lastTransitionTime":"2026-02-24T00:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.090250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.090591 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.090679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.090783 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.090848 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:51Z","lastTransitionTime":"2026-02-24T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.193391 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.193696 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.193790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.193939 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.194045 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:51Z","lastTransitionTime":"2026-02-24T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.297669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.297709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.297721 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.297740 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.297752 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:51Z","lastTransitionTime":"2026-02-24T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.400568 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.400634 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.400672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.400698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.400713 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:51Z","lastTransitionTime":"2026-02-24T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.502129 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.503474 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.503520 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.503533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.503554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.503572 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:51Z","lastTransitionTime":"2026-02-24T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.518550 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.530831 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.543133 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.553234 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.565584 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.578400 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.589158 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.599293 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.608890 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.608965 4756 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.608976 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.608992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.609002 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:51Z","lastTransitionTime":"2026-02-24T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.612833 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:51Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.711665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.711711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.711723 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.711741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.711753 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:51Z","lastTransitionTime":"2026-02-24T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.815098 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.815148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.815157 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.815176 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.815186 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:51Z","lastTransitionTime":"2026-02-24T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.817200 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:49:28.302328601 +0000 UTC Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.917628 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.917666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.917674 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.917689 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:51 crc kubenswrapper[4756]: I0224 00:06:51.917700 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:51Z","lastTransitionTime":"2026-02-24T00:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.020441 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.020490 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.020502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.020521 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.020534 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:52Z","lastTransitionTime":"2026-02-24T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.123472 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.123530 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.123541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.123563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.123575 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:52Z","lastTransitionTime":"2026-02-24T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.226247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.226321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.226338 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.226366 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.226385 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:52Z","lastTransitionTime":"2026-02-24T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.329817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.330100 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.330116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.330139 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.330157 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:52Z","lastTransitionTime":"2026-02-24T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.433808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.434059 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.434096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.434123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.434140 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:52Z","lastTransitionTime":"2026-02-24T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.538117 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.538170 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.538181 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.538201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.538215 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:52Z","lastTransitionTime":"2026-02-24T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.641126 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.641164 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.641176 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.641201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.641217 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:52Z","lastTransitionTime":"2026-02-24T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.712223 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.712404 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.712431 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:07:24.71238768 +0000 UTC m=+101.623250383 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.712553 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.712623 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:24.712609097 +0000 UTC m=+101.623471730 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.744594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.744629 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.744639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.744655 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.744667 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:52Z","lastTransitionTime":"2026-02-24T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.813631 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.813760 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.813812 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.814014 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.814173 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:24.814141784 +0000 UTC m=+101.725004457 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.814631 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.814683 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.814708 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.814771 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:24.814750623 +0000 UTC m=+101.725613296 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.815286 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.815333 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.815354 4756 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.815430 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:24.815406603 +0000 UTC m=+101.726269266 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.818402 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:12:56.618357093 +0000 UTC Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.832260 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.832333 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.832267 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.832433 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.832541 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:52 crc kubenswrapper[4756]: E0224 00:06:52.832655 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.847789 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.847884 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.847904 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.847932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.847953 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:52Z","lastTransitionTime":"2026-02-24T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.951777 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.951868 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.951895 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.951929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:52 crc kubenswrapper[4756]: I0224 00:06:52.951956 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:52Z","lastTransitionTime":"2026-02-24T00:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.054738 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.054812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.054827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.054845 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.054858 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:53Z","lastTransitionTime":"2026-02-24T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.156844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.156916 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.156933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.156959 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.156980 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:53Z","lastTransitionTime":"2026-02-24T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.260135 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.260192 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.260205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.260228 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.260241 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:53Z","lastTransitionTime":"2026-02-24T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.363961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.364048 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.364112 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.364140 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.364161 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:53Z","lastTransitionTime":"2026-02-24T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.469646 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.469703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.469714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.469731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.469741 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:53Z","lastTransitionTime":"2026-02-24T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.572641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.572688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.572709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.572730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.572754 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:53Z","lastTransitionTime":"2026-02-24T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.596029 4756 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.676248 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.676300 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.676311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.676333 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.676347 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:53Z","lastTransitionTime":"2026-02-24T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.779596 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.779660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.779669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.779685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.779695 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:53Z","lastTransitionTime":"2026-02-24T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.819267 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 17:54:18.320246067 +0000 UTC Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.850907 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":
\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.869652 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.882597 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.882677 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.882688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.882711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.882766 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:53Z","lastTransitionTime":"2026-02-24T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.886173 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.899023 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.913745 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:06:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.932057 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.946884 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.963398 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.977873 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:53Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.985033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.985107 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.985119 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.985138 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:53 crc kubenswrapper[4756]: I0224 00:06:53.985152 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:53Z","lastTransitionTime":"2026-02-24T00:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.088151 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.088212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.088232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.088257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.088276 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:54Z","lastTransitionTime":"2026-02-24T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.191251 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.191311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.191325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.191343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.191356 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:54Z","lastTransitionTime":"2026-02-24T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.294484 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.294537 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.294550 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.294572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.294587 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:54Z","lastTransitionTime":"2026-02-24T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.398518 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.398594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.398612 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.399028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.399112 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:54Z","lastTransitionTime":"2026-02-24T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.502206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.502244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.502256 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.502271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.502280 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:54Z","lastTransitionTime":"2026-02-24T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.605459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.605505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.605516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.605540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.605556 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:54Z","lastTransitionTime":"2026-02-24T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.708585 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.708627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.708640 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.708662 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.708676 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:54Z","lastTransitionTime":"2026-02-24T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.811419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.811527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.811556 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.811592 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.811619 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:54Z","lastTransitionTime":"2026-02-24T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.820445 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:05:10.18465026 +0000 UTC Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.833137 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.833208 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:54 crc kubenswrapper[4756]: E0224 00:06:54.833280 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.833297 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:54 crc kubenswrapper[4756]: E0224 00:06:54.833371 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:54 crc kubenswrapper[4756]: E0224 00:06:54.833450 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.914260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.914299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.914311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.914327 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:54 crc kubenswrapper[4756]: I0224 00:06:54.914341 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:54Z","lastTransitionTime":"2026-02-24T00:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.017335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.017405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.017424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.017451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.017468 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.119913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.119955 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.119969 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.119988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.120003 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.222808 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.222880 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.222892 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.222914 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.222925 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.325999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.326055 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.326096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.326119 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.326137 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.429676 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.429753 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.429777 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.429841 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.429876 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.533567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.533639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.533659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.533687 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.533705 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.637056 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.637144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.637158 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.637180 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.637194 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.740968 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.741029 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.741047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.741113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.741133 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.820915 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:12:26.29715845 +0000 UTC Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.826921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.826964 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.826981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.827002 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.827016 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: E0224 00:06:55.843782 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.849364 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.849408 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.849420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.849438 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.849451 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: E0224 00:06:55.864696 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.869414 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.869460 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.869476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.869493 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.869507 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: E0224 00:06:55.899100 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.905526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.905578 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.905593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.905614 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.905630 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: E0224 00:06:55.926057 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.931039 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.931083 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.931097 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.931110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.931121 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:55 crc kubenswrapper[4756]: E0224 00:06:55.949536 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:55Z is after 2025-08-24T17:21:41Z" Feb 24 00:06:55 crc kubenswrapper[4756]: E0224 00:06:55.949865 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.951801 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.951864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.951879 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.951901 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:55 crc kubenswrapper[4756]: I0224 00:06:55.951916 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:55Z","lastTransitionTime":"2026-02-24T00:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.054576 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.054971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.055087 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.055190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.055295 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:56Z","lastTransitionTime":"2026-02-24T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.158357 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.158416 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.158425 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.158443 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.158454 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:56Z","lastTransitionTime":"2026-02-24T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.261602 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.261647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.261660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.261682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.261702 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:56Z","lastTransitionTime":"2026-02-24T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.364673 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.364750 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.364760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.364786 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.364799 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:56Z","lastTransitionTime":"2026-02-24T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.467754 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.467813 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.467832 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.467854 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.467868 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:56Z","lastTransitionTime":"2026-02-24T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.570800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.570853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.570865 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.570882 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.570894 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:56Z","lastTransitionTime":"2026-02-24T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.673340 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.673687 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.673773 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.673878 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.673963 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:56Z","lastTransitionTime":"2026-02-24T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.775822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.776167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.776245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.776314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.776377 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:56Z","lastTransitionTime":"2026-02-24T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.821411 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:15:25.958287838 +0000 UTC Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.832767 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.832768 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:56 crc kubenswrapper[4756]: E0224 00:06:56.832925 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.832941 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:56 crc kubenswrapper[4756]: E0224 00:06:56.832964 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:56 crc kubenswrapper[4756]: E0224 00:06:56.833006 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.879201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.879495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.879578 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.879673 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.879747 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:56Z","lastTransitionTime":"2026-02-24T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.982031 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.982093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.982105 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.982122 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:56 crc kubenswrapper[4756]: I0224 00:06:56.982134 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:56Z","lastTransitionTime":"2026-02-24T00:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.085534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.085591 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.085602 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.085624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.085635 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:57Z","lastTransitionTime":"2026-02-24T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.188268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.188306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.188316 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.188332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.188344 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:57Z","lastTransitionTime":"2026-02-24T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.291285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.291324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.291336 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.291358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.291371 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:57Z","lastTransitionTime":"2026-02-24T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.393947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.393989 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.394001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.394019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.394035 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:57Z","lastTransitionTime":"2026-02-24T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.496141 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.496194 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.496208 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.496229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.496245 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:57Z","lastTransitionTime":"2026-02-24T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.599016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.599049 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.599057 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.599093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.599102 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:57Z","lastTransitionTime":"2026-02-24T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.701477 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.701552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.701566 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.701590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.701608 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:57Z","lastTransitionTime":"2026-02-24T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.804877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.804933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.804943 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.804963 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.804976 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:57Z","lastTransitionTime":"2026-02-24T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.822286 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 15:21:54.764996352 +0000 UTC Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.907424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.907465 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.907478 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.907495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:57 crc kubenswrapper[4756]: I0224 00:06:57.907506 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:57Z","lastTransitionTime":"2026-02-24T00:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.011418 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.011470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.011481 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.011501 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.011514 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:58Z","lastTransitionTime":"2026-02-24T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.114740 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.114801 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.114812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.114835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.114848 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:58Z","lastTransitionTime":"2026-02-24T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.217746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.217822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.217835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.217858 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.217876 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:58Z","lastTransitionTime":"2026-02-24T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.321458 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.321514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.321527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.321549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.321562 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:58Z","lastTransitionTime":"2026-02-24T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.424271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.424349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.424375 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.424411 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.424436 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:58Z","lastTransitionTime":"2026-02-24T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.527405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.527492 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.527516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.527547 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.527568 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:58Z","lastTransitionTime":"2026-02-24T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.631028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.631124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.631146 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.631171 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.631189 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:58Z","lastTransitionTime":"2026-02-24T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.734656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.734724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.734746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.734774 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.734793 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:58Z","lastTransitionTime":"2026-02-24T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.822942 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:48:47.968848408 +0000 UTC Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.832464 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.832529 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.832616 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:06:58 crc kubenswrapper[4756]: E0224 00:06:58.832739 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:06:58 crc kubenswrapper[4756]: E0224 00:06:58.832872 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:06:58 crc kubenswrapper[4756]: E0224 00:06:58.832972 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.837599 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.837658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.837681 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.837708 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.837732 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:58Z","lastTransitionTime":"2026-02-24T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.940866 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.940979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.941031 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.941108 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:58 crc kubenswrapper[4756]: I0224 00:06:58.941130 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:58Z","lastTransitionTime":"2026-02-24T00:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.044860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.044953 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.044991 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.045023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.045040 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:59Z","lastTransitionTime":"2026-02-24T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.150016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.150181 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.150260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.150297 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.150320 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:59Z","lastTransitionTime":"2026-02-24T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.253926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.254003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.254016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.254036 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.254106 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:59Z","lastTransitionTime":"2026-02-24T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.356893 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.356981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.356994 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.357016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.357029 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:59Z","lastTransitionTime":"2026-02-24T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.459651 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.459701 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.459720 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.459739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.459793 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:59Z","lastTransitionTime":"2026-02-24T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.563232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.563304 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.563329 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.563359 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.563378 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:59Z","lastTransitionTime":"2026-02-24T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.666656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.666731 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.666755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.666784 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.666804 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:59Z","lastTransitionTime":"2026-02-24T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.770424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.770498 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.770519 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.770549 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.770571 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:59Z","lastTransitionTime":"2026-02-24T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.824041 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 15:28:28.067240887 +0000 UTC
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.874425 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.874476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.874493 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.874510 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.874523 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:59Z","lastTransitionTime":"2026-02-24T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.977361 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.977426 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.977441 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.977459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:06:59 crc kubenswrapper[4756]: I0224 00:06:59.977470 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:06:59Z","lastTransitionTime":"2026-02-24T00:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.080922 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.081018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.081035 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.081060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.081106 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.184044 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.184145 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.184169 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.184196 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.184216 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.287664 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.287725 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.287746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.287771 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.287787 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.390257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.390311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.390323 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.390343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.390356 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.492917 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.492984 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.493001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.493029 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.493049 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.595799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.595859 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.595874 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.595899 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.595917 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.699828 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.699907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.699926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.699953 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.699972 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.803207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.803279 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.803299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.803328 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.803347 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.824935 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:53:46.305452932 +0000 UTC
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.832293 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.832334 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.832490 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 00:07:00 crc kubenswrapper[4756]: E0224 00:07:00.832671 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 00:07:00 crc kubenswrapper[4756]: E0224 00:07:00.832889 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 00:07:00 crc kubenswrapper[4756]: E0224 00:07:00.833048 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.906661 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.906737 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.906761 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.906791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:00 crc kubenswrapper[4756]: I0224 00:07:00.906811 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:00Z","lastTransitionTime":"2026-02-24T00:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.010633 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.010695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.010714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.010738 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.010758 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.113881 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.113967 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.113987 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.114015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.114032 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.217850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.217957 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.217978 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.218008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.218031 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.321410 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.321495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.321513 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.321538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.321557 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.424476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.424551 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.424569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.424606 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.424627 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.528534 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.528592 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.528604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.528625 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.528640 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.631649 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.631748 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.631778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.631814 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.631839 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.735198 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.735268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.735293 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.735325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.735346 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.825470 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 02:55:28.51003788 +0000 UTC
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.838591 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.838655 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.838667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.838707 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.838720 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.941864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.941920 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.941939 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.941965 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:01 crc kubenswrapper[4756]: I0224 00:07:01.941988 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:01Z","lastTransitionTime":"2026-02-24T00:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.046131 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.046191 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.046207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.046264 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.046281 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.149129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.149426 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.149539 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.149606 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.149660 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.252262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.252355 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.252384 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.252416 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.252440 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.355119 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.355178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.355197 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.355223 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.355240 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.459386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.459888 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.460116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.460283 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.460505 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.564226 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.564781 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.564924 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.565103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.565258 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.668806 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.668860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.668877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.668907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.668926 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.771969 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.772314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.772409 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.772493 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.772577 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.826307 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:14:11.528585131 +0000 UTC Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.832689 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.832749 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:02 crc kubenswrapper[4756]: E0224 00:07:02.833017 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:02 crc kubenswrapper[4756]: E0224 00:07:02.833237 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.833416 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:02 crc kubenswrapper[4756]: E0224 00:07:02.833773 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.875331 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.875402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.875421 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.875449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.875469 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.978104 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.978152 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.978168 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.978193 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:02 crc kubenswrapper[4756]: I0224 00:07:02.978211 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:02Z","lastTransitionTime":"2026-02-24T00:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.081404 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.081452 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.081467 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.081489 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.081506 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.183885 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.183935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.183946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.183965 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.183977 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.287246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.287306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.287324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.287348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.287364 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.391303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.391370 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.391389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.391417 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.391434 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.494936 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.495015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.495034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.495096 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.495117 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.599515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.599603 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.599623 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.599649 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.599669 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.703019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.703116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.703137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.703164 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.703182 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.806672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.806725 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.806736 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.806751 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.806762 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.826764 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 00:30:22.872701151 +0000 UTC Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.856758 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.876391 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.897645 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.910262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.910344 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.910376 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.910409 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.910434 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:03Z","lastTransitionTime":"2026-02-24T00:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.919107 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.939355 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.959753 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:03 crc kubenswrapper[4756]: I0224 00:07:03.981870 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd2
94f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:03Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.003745 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.013382 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.013545 4756 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.013670 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.013799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.013938 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.023159 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:04Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.117581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.117684 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.117755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.117788 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.117850 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.220916 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.221008 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.221033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.221093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.221113 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.324303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.324680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.324767 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.324859 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.324949 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.428613 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.428699 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.428716 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.428733 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.428746 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.531846 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.531919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.531939 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.531969 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.531989 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.635695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.635754 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.635765 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.635784 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.635800 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.739659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.740180 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.740392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.740540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.740684 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.827537 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 06:33:21.347588785 +0000 UTC Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.832999 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.833087 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.833014 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:04 crc kubenswrapper[4756]: E0224 00:07:04.833299 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:04 crc kubenswrapper[4756]: E0224 00:07:04.833455 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:04 crc kubenswrapper[4756]: E0224 00:07:04.833612 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.843980 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.844095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.844117 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.844177 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.844196 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.948711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.948804 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.948823 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.948851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:04 crc kubenswrapper[4756]: I0224 00:07:04.948872 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:04Z","lastTransitionTime":"2026-02-24T00:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.052480 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.052564 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.052594 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.052659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.052685 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.155921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.155985 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.156004 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.156036 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.156060 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.264421 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.264784 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.265427 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.265459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.265477 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.370291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.370341 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.370351 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.370369 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.370381 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.473544 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.473609 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.473628 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.473653 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.473668 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.577036 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.577126 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.577140 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.577164 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.577208 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.680658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.680712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.680727 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.680749 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.680762 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.783007 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.783085 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.783099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.783121 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.783138 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.827920 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:23:45.935293075 +0000 UTC Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.888313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.888358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.888371 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.888393 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.888409 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.970985 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.971116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.971149 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.971194 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.971230 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:05 crc kubenswrapper[4756]: E0224 00:07:05.991454 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:05Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.997147 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.997203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.997217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.997244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:05 crc kubenswrapper[4756]: I0224 00:07:05.997262 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:05Z","lastTransitionTime":"2026-02-24T00:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: E0224 00:07:06.017749 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.023422 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.023483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.023500 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.023528 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.023546 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: E0224 00:07:06.040458 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.044637 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.044667 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.044679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.044700 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.044715 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: E0224 00:07:06.062736 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.068496 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.068604 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.068627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.068654 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.068671 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: E0224 00:07:06.087005 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:06Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:06 crc kubenswrapper[4756]: E0224 00:07:06.087202 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.089698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.089768 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.089785 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.089807 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.089823 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.193252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.193320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.193333 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.193362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.193379 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.296866 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.296926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.296938 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.296960 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.296977 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.399871 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.399922 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.399935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.399954 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.399968 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.503837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.503901 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.503919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.503943 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.503958 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.606684 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.606743 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.606754 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.606775 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.606790 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.710844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.710913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.710925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.710950 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.710965 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.814427 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.814485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.814502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.814526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.814547 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.828321 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:17:20.891459049 +0000 UTC Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.832836 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.832836 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.832968 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:06 crc kubenswrapper[4756]: E0224 00:07:06.833170 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:06 crc kubenswrapper[4756]: E0224 00:07:06.833325 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:06 crc kubenswrapper[4756]: E0224 00:07:06.833440 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.918494 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.918553 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.918571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.918598 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:06 crc kubenswrapper[4756]: I0224 00:07:06.918619 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:06Z","lastTransitionTime":"2026-02-24T00:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.022475 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.022541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.022554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.022579 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.022594 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.125546 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.125647 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.125672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.125703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.125725 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.228709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.228794 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.228818 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.228847 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.228868 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.333295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.333374 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.333393 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.333420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.333440 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.437468 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.437543 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.437564 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.437592 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.437612 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.540979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.541047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.541095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.541127 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.541147 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.644276 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.644329 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.644343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.644362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.644376 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.748130 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.748226 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.748245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.748271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.748288 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.829253 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:27:30.510916188 +0000 UTC Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.851308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.851375 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.851392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.851418 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.851434 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.954212 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.954299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.954321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.954348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:07 crc kubenswrapper[4756]: I0224 00:07:07.954367 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:07Z","lastTransitionTime":"2026-02-24T00:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.057774 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.057844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.057864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.057891 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.057908 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.160863 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.160935 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.160952 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.160979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.161000 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.264375 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.264440 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.264458 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.264521 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.264546 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.368333 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.368451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.368487 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.368519 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.368538 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.471777 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.471847 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.471857 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.471877 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.471892 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.574303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.574362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.574377 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.574399 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.574411 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.678148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.678199 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.678211 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.678233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.678248 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.781868 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.781944 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.781965 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.781995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.782057 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.829407 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 06:45:28.27742925 +0000 UTC Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.832923 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:08 crc kubenswrapper[4756]: E0224 00:07:08.833222 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.833281 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.833317 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:08 crc kubenswrapper[4756]: E0224 00:07:08.833429 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:08 crc kubenswrapper[4756]: E0224 00:07:08.833570 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.885131 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.885244 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.885268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.885299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.885320 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.988608 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.988668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.988684 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.988710 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:08 crc kubenswrapper[4756]: I0224 00:07:08.988729 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:08Z","lastTransitionTime":"2026-02-24T00:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.092203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.092267 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.092285 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.092314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.092337 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.195388 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.195473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.195500 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.195532 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.195555 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.299593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.299650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.299661 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.299685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.299699 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.403001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.403113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.403134 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.403167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.403203 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.507330 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.507412 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.507445 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.507475 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.507499 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.610817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.610981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.611009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.611042 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.611096 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.713816 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.713910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.713936 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.713976 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.714024 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.817447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.817517 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.817535 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.817562 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.817581 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.830123 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:26:51.813157705 +0000 UTC Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.920722 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.920781 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.920791 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.920836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:09 crc kubenswrapper[4756]: I0224 00:07:09.920847 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:09Z","lastTransitionTime":"2026-02-24T00:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.024001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.024055 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.024093 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.024114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.024127 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.126938 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.126991 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.127001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.127024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.127038 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.230704 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.230763 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.230777 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.230799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.230815 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.334516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.334593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.334615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.334685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.334709 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.437827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.437891 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.437903 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.437925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.437941 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.540919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.540986 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.541001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.541026 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.541041 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.643992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.644050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.644065 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.644103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.644117 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.747218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.747302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.747332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.747366 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.747451 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.831307 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:54:24.566475549 +0000 UTC Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.832592 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.832613 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.832592 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:10 crc kubenswrapper[4756]: E0224 00:07:10.832736 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:10 crc kubenswrapper[4756]: E0224 00:07:10.832854 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:10 crc kubenswrapper[4756]: E0224 00:07:10.832957 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.850559 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.850598 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.850607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.850624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.850635 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.954946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.955011 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.955021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.955039 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:10 crc kubenswrapper[4756]: I0224 00:07:10.955049 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:10Z","lastTransitionTime":"2026-02-24T00:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.058245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.058303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.058320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.058345 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.058365 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.161857 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.161909 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.161924 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.161952 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.161968 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.270207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.270286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.270310 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.270346 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.270373 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.374102 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.374174 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.374196 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.374224 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.374243 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.477706 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.477776 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.477797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.477826 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.477847 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.582119 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.582187 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.582206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.582237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.582256 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.685552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.685602 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.685613 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.685633 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.685645 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.789094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.789154 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.789168 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.789190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.789203 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.832223 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:56:28.489458028 +0000 UTC Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.892470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.892548 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.892570 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.892597 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.892621 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.996407 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.996505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.996526 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.996557 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:11 crc kubenswrapper[4756]: I0224 00:07:11.996577 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:11Z","lastTransitionTime":"2026-02-24T00:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.099736 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.099836 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.099868 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.099903 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.099931 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.203447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.203520 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.203541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.203569 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.203595 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.306233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.306307 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.306327 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.306359 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.306387 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.409468 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.409554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.409579 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.409608 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.409630 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.513013 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.513131 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.513151 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.513176 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.513197 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.617253 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.617392 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.617416 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.617447 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.617467 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.719942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.719991 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.720004 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.720024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.720037 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.822889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.822969 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.822988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.823013 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.823033 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.832439 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 07:22:51.287694543 +0000 UTC Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.832612 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.832672 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.832697 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:12 crc kubenswrapper[4756]: E0224 00:07:12.832825 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:12 crc kubenswrapper[4756]: E0224 00:07:12.832965 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:12 crc kubenswrapper[4756]: E0224 00:07:12.833139 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.927205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.927263 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.927276 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.927299 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:12 crc kubenswrapper[4756]: I0224 00:07:12.927313 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:12Z","lastTransitionTime":"2026-02-24T00:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.030643 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.030689 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.030706 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.030727 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.030743 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.133293 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.133352 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.133368 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.133391 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.133409 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.236499 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.236958 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.237168 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.237341 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.237554 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.340544 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.340601 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.340615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.340665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.340681 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.444177 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.444221 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.444233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.444253 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.444268 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.547445 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.547493 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.547505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.547527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.547541 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.651132 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.651189 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.651203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.651230 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.651247 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.754424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.754470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.754483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.754501 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.754511 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.832744 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:51:21.887990685 +0000 UTC Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.856901 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.856947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.856960 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.856981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.856994 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.857530 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.877278 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.896010 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.914845 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.932015 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.952710 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.958756 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.958802 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.958819 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.958844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.958863 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:13Z","lastTransitionTime":"2026-02-24T00:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.974644 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:13 crc kubenswrapper[4756]: I0224 00:07:13.992121 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:13Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.007850 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:14Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.062528 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.062581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.062593 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.062615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.062629 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.166586 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.166657 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.166683 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.166722 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.166746 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.269780 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.270039 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.270057 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.270120 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.270141 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.373393 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.373444 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.373456 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.373478 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.373496 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.476789 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.476859 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.476880 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.476907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.476924 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.579445 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.579497 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.579509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.579576 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.579589 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.683184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.683642 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.683806 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.683964 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.684133 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.787403 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.787456 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.787466 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.787487 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.787500 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.832906 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:14 crc kubenswrapper[4756]: E0224 00:07:14.834877 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.832965 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 14:08:33.344007099 +0000 UTC Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.833564 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:14 crc kubenswrapper[4756]: E0224 00:07:14.836136 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.833008 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:14 crc kubenswrapper[4756]: E0224 00:07:14.836921 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.852706 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.891139 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.891455 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.891623 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.891759 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.891891 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.995609 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.995688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.995709 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.995741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:14 crc kubenswrapper[4756]: I0224 00:07:14.995762 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:14Z","lastTransitionTime":"2026-02-24T00:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.099295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.099349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.099363 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.099387 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.099402 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.202352 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.202396 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.202406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.202424 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.202437 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.304829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.304893 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.304907 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.304926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.304940 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.407792 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.407857 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.407876 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.407902 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.407923 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.510916 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.511019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.511041 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.511118 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.511159 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.614981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.615042 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.615054 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.615108 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.615123 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.719049 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.719144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.719166 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.719197 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.719215 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.822168 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.822245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.822264 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.822291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.822310 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.836113 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:46:55.838048471 +0000 UTC Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.926183 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.926260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.926284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.926308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:15 crc kubenswrapper[4756]: I0224 00:07:15.926327 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:15Z","lastTransitionTime":"2026-02-24T00:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.030185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.030265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.030289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.030313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.030331 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.132849 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.133203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.133292 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.133362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.133424 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.236778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.236856 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.236883 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.236914 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.236938 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.341558 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.341921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.341990 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.342058 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.342144 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.375983 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.376045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.376083 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.376116 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.376167 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: E0224 00:07:16.389446 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z"
Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.394798 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.394947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.395028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.395124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.395202 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:07:16 crc kubenswrapper[4756]: E0224 00:07:16.407697 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z"
Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.413484 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.413524 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.413540 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.413564 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.413584 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:07:16 crc kubenswrapper[4756]: E0224 00:07:16.433354 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.440001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.440053 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.440070 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.440110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.440124 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: E0224 00:07:16.466102 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.471767 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.472137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.472608 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.472753 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.472909 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: E0224 00:07:16.493016 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:16Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:16 crc kubenswrapper[4756]: E0224 00:07:16.493275 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.496483 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.496543 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.496557 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.496581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.496596 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.599685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.599755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.599778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.599806 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.599825 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.703544 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.703908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.704154 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.704460 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.704702 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.808031 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.808452 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.808588 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.808748 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.808921 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.832705 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:16 crc kubenswrapper[4756]: E0224 00:07:16.833179 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.833412 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:16 crc kubenswrapper[4756]: E0224 00:07:16.833870 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.833515 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:16 crc kubenswrapper[4756]: E0224 00:07:16.834476 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.836675 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:27:48.842348629 +0000 UTC Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.847880 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.911650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.911706 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.911719 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.911742 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:16 crc kubenswrapper[4756]: I0224 00:07:16.911753 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:16Z","lastTransitionTime":"2026-02-24T00:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.015203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.015265 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.015282 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.015307 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.015325 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.118559 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.118831 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.118958 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.119090 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.119197 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.222830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.223129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.223215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.223312 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.223380 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.326215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.326298 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.326318 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.326348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.326369 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.429998 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.430050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.430081 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.430103 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.430114 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.533345 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.533522 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.533546 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.533577 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.533597 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.637056 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.637514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.637752 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.638050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.638331 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.741475 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.741528 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.741541 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.741563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.741577 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.837080 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:05:01.583816051 +0000 UTC Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.843835 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.843888 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.843905 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.843927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.843946 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.947024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.947146 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.947171 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.947207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:17 crc kubenswrapper[4756]: I0224 00:07:17.947232 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:17Z","lastTransitionTime":"2026-02-24T00:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.050459 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.050504 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.050514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.050535 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.050546 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.152790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.152830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.152842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.152860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.152870 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.255960 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.256014 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.256027 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.256049 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.256081 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.359381 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.359451 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.359476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.359509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.359533 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.462176 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.462215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.462226 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.462245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.462256 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.565029 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.565066 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.565098 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.565113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.565123 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.667789 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.667844 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.667863 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.667889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.667908 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.771143 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.771195 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.771205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.771226 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.771237 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.832619 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.832682 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:18 crc kubenswrapper[4756]: E0224 00:07:18.832771 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.832619 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:18 crc kubenswrapper[4756]: E0224 00:07:18.832937 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:18 crc kubenswrapper[4756]: E0224 00:07:18.833000 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.837787 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:07:27.719992092 +0000 UTC Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.873707 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.873739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.873766 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.873782 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.873794 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.976013 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.976073 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.976087 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.976104 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:18 crc kubenswrapper[4756]: I0224 00:07:18.976117 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:18Z","lastTransitionTime":"2026-02-24T00:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.079703 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.079788 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.079810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.079843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.079876 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.183715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.183781 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.183799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.183827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.183846 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.287283 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.287332 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.287344 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.287364 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.287378 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.390536 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.390610 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.390629 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.390658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.390677 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.493313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.493369 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.493379 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.493402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.493413 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.596552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.596607 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.596618 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.596638 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.596657 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.698897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.698958 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.698974 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.698997 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.699012 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.801725 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.801780 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.801797 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.801817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.801830 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.838157 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 07:17:57.322255402 +0000 UTC Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.904478 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.904527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.904539 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.904558 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:19 crc kubenswrapper[4756]: I0224 00:07:19.904569 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:19Z","lastTransitionTime":"2026-02-24T00:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.007420 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.007504 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.007536 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.007570 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.007593 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.110226 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.110284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.110296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.110317 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.110331 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.213754 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.213796 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.213806 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.213822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.213834 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.316438 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.316485 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.316499 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.316518 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.316530 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.419005 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.419083 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.419095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.419114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.419125 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.521975 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.522043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.522059 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.522112 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.522130 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.624934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.624993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.625007 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.625028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.625042 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.728334 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.728390 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.728407 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.728432 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.728448 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.831663 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.831713 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.831724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.831742 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.831753 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.832754 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.832827 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:20 crc kubenswrapper[4756]: E0224 00:07:20.832888 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.832827 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:20 crc kubenswrapper[4756]: E0224 00:07:20.833042 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:20 crc kubenswrapper[4756]: E0224 00:07:20.833147 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.839132 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:28:06.910959785 +0000 UTC Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.935739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.935830 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.935843 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.935866 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:20 crc kubenswrapper[4756]: I0224 00:07:20.935881 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:20Z","lastTransitionTime":"2026-02-24T00:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.038525 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.038567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.038578 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.038596 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.038610 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.140973 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.141018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.141030 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.141049 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.141077 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.243828 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.243880 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.243897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.243922 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.243942 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.347197 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.347276 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.347296 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.347325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.347345 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.449965 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.450018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.450030 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.450050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.450084 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.553180 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.553233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.553249 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.553270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.553283 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.655971 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.656315 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.656401 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.656502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.656591 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.759750 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.760115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.760236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.760384 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.760496 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.839981 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 02:33:08.118824619 +0000 UTC Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.863865 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.863942 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.863961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.863988 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.864006 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.968041 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.968362 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.968457 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.968533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:21 crc kubenswrapper[4756]: I0224 00:07:21.968594 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:21Z","lastTransitionTime":"2026-02-24T00:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.071705 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.071765 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.071783 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.071810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.071829 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.174185 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.174273 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.174293 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.174322 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.174340 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.284627 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.284690 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.284710 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.284738 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.284761 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.379169 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-64mrj"] Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.379924 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-qb88h"] Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.380525 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.381215 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-64mrj" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.385616 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.386018 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.386130 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.386337 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.387965 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.388329 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.388362 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.388421 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.391933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.392227 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.392320 4756 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.392419 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.392532 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.406354 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.428332 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.448254 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.464782 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.484104 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.495841 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.495897 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.495916 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.495945 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.495967 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.499309 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/071714b1-b44e-4085-adf5-0ed6b6e64af3-rootfs\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.499480 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zks4n\" (UniqueName: \"kubernetes.io/projected/b884f6c3-73b0-42a3-b301-c56e0043cd70-kube-api-access-zks4n\") pod \"node-resolver-64mrj\" (UID: \"b884f6c3-73b0-42a3-b301-c56e0043cd70\") " pod="openshift-dns/node-resolver-64mrj" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.499583 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxn24\" (UniqueName: \"kubernetes.io/projected/071714b1-b44e-4085-adf5-0ed6b6e64af3-kube-api-access-sxn24\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " 
pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.499702 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/071714b1-b44e-4085-adf5-0ed6b6e64af3-proxy-tls\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.499821 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b884f6c3-73b0-42a3-b301-c56e0043cd70-hosts-file\") pod \"node-resolver-64mrj\" (UID: \"b884f6c3-73b0-42a3-b301-c56e0043cd70\") " pod="openshift-dns/node-resolver-64mrj" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.499913 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/071714b1-b44e-4085-adf5-0ed6b6e64af3-mcd-auth-proxy-config\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.502388 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.516908 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.550229 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.571415 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.591449 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.598992 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.599050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.599099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.599127 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.599146 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.601574 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/071714b1-b44e-4085-adf5-0ed6b6e64af3-proxy-tls\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.601659 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b884f6c3-73b0-42a3-b301-c56e0043cd70-hosts-file\") pod \"node-resolver-64mrj\" (UID: \"b884f6c3-73b0-42a3-b301-c56e0043cd70\") " pod="openshift-dns/node-resolver-64mrj" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.601694 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/071714b1-b44e-4085-adf5-0ed6b6e64af3-mcd-auth-proxy-config\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.601750 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zks4n\" (UniqueName: \"kubernetes.io/projected/b884f6c3-73b0-42a3-b301-c56e0043cd70-kube-api-access-zks4n\") pod \"node-resolver-64mrj\" (UID: \"b884f6c3-73b0-42a3-b301-c56e0043cd70\") " pod="openshift-dns/node-resolver-64mrj" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.601778 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/071714b1-b44e-4085-adf5-0ed6b6e64af3-rootfs\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " 
pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.601803 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxn24\" (UniqueName: \"kubernetes.io/projected/071714b1-b44e-4085-adf5-0ed6b6e64af3-kube-api-access-sxn24\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.601863 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b884f6c3-73b0-42a3-b301-c56e0043cd70-hosts-file\") pod \"node-resolver-64mrj\" (UID: \"b884f6c3-73b0-42a3-b301-c56e0043cd70\") " pod="openshift-dns/node-resolver-64mrj" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.602012 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/071714b1-b44e-4085-adf5-0ed6b6e64af3-rootfs\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.603141 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/071714b1-b44e-4085-adf5-0ed6b6e64af3-mcd-auth-proxy-config\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.609842 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.613048 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/071714b1-b44e-4085-adf5-0ed6b6e64af3-proxy-tls\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.625281 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.631340 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxn24\" (UniqueName: \"kubernetes.io/projected/071714b1-b44e-4085-adf5-0ed6b6e64af3-kube-api-access-sxn24\") pod \"machine-config-daemon-qb88h\" (UID: \"071714b1-b44e-4085-adf5-0ed6b6e64af3\") " pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.631534 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zks4n\" (UniqueName: \"kubernetes.io/projected/b884f6c3-73b0-42a3-b301-c56e0043cd70-kube-api-access-zks4n\") pod \"node-resolver-64mrj\" (UID: \"b884f6c3-73b0-42a3-b301-c56e0043cd70\") " pod="openshift-dns/node-resolver-64mrj" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.643593 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.662033 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.675471 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.688827 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.703022 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.703097 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.703110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.703134 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.703148 4756 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.704515 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.710036 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.719532 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.722482 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-64mrj" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.737953 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.754589 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.767621 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dvdfz"] Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.769225 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.772396 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5xm6s"] Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.773497 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-f8vwm"] Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.773947 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.774727 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.774896 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.778025 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.779823 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.782623 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.783054 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.784283 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.784301 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.784331 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.784552 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.784623 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.784636 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.784723 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.787207 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.790114 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.790217 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.806169 4756 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.806206 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.806215 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.806231 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.806241 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.814605 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.829037 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\"
,\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.832271 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:22 crc kubenswrapper[4756]: E0224 00:07:22.832387 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.832573 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.832588 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:22 crc kubenswrapper[4756]: E0224 00:07:22.832675 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:22 crc kubenswrapper[4756]: E0224 00:07:22.832813 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.840993 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 00:57:08.762697322 +0000 UTC Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.848892 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.866195 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.885728 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.897399 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905604 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-etc-kubernetes\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905653 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-cnibin\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905694 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-script-lib\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905715 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-run-k8s-cni-cncf-io\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905732 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-var-lib-cni-bin\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905751 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-run-multus-certs\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905769 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-netd\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905786 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-cni-dir\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905802 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-var-lib-openvswitch\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905819 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-ovn\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905837 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-cnibin\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905856 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-os-release\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905874 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-netns\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905891 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905907 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8cxg\" (UniqueName: \"kubernetes.io/projected/275f42a1-754e-4b35-8960-050748cbf355-kube-api-access-m8cxg\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905928 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-daemon-config\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905948 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz4cq\" (UniqueName: \"kubernetes.io/projected/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-kube-api-access-jz4cq\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905969 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-systemd-units\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.905993 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-etc-openvswitch\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906014 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-ovn-kubernetes\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906033 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvfww\" (UniqueName: \"kubernetes.io/projected/29053aeb-7913-4b4d-94dd-503af8e1415f-kube-api-access-lvfww\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 
00:07:22.906051 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-conf-dir\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906091 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-node-log\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906109 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-var-lib-cni-multus\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906129 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-var-lib-kubelet\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906146 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-kubelet\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906164 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/275f42a1-754e-4b35-8960-050748cbf355-cni-binary-copy\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906180 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-system-cni-dir\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906197 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-os-release\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906215 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-cni-binary-copy\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906234 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/275f42a1-754e-4b35-8960-050748cbf355-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 
00:07:22.906251 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-bin\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906268 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906287 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-config\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906304 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-hostroot\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906320 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-systemd\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc 
kubenswrapper[4756]: I0224 00:07:22.906343 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-log-socket\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906358 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29053aeb-7913-4b4d-94dd-503af8e1415f-ovn-node-metrics-cert\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906374 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-run-netns\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906397 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-openvswitch\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906414 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-system-cni-dir\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " 
pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906432 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-slash\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906452 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-env-overrides\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.906469 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-socket-dir-parent\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.909584 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.909621 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.909632 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.909653 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 
00:07:22.909670 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:22Z","lastTransitionTime":"2026-02-24T00:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.919255 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.938299 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.964360 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036
f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:22 crc kubenswrapper[4756]: I0224 00:07:22.984178 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\"
,\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.000641 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:22Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008093 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-netns\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008187 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008243 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8cxg\" (UniqueName: \"kubernetes.io/projected/275f42a1-754e-4b35-8960-050748cbf355-kube-api-access-m8cxg\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008295 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz4cq\" (UniqueName: \"kubernetes.io/projected/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-kube-api-access-jz4cq\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008344 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-daemon-config\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008370 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-netns\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008430 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-etc-openvswitch\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008488 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-ovn-kubernetes\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008569 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvfww\" (UniqueName: \"kubernetes.io/projected/29053aeb-7913-4b4d-94dd-503af8e1415f-kube-api-access-lvfww\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008614 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-conf-dir\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008684 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-systemd-units\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008728 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-var-lib-kubelet\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008781 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-node-log\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008927 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-systemd-units\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008981 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-conf-dir\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.009026 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-var-lib-kubelet\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.009089 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-node-log\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.009153 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-etc-openvswitch\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.009278 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-ovn-kubernetes\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.008827 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-tuning-conf-dir\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.009892 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-daemon-config\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.010981 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-var-lib-cni-multus\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc 
kubenswrapper[4756]: I0224 00:07:23.011121 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-kubelet\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.011172 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/275f42a1-754e-4b35-8960-050748cbf355-cni-binary-copy\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.011221 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/275f42a1-754e-4b35-8960-050748cbf355-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.011276 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-system-cni-dir\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.011326 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-os-release\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.011373 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-cni-binary-copy\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.011420 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-hostroot\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.012055 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-systemd\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.012225 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-bin\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.012718 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/275f42a1-754e-4b35-8960-050748cbf355-cni-binary-copy\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.012768 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.012802 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-var-lib-cni-multus\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.012824 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-config\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.012870 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29053aeb-7913-4b4d-94dd-503af8e1415f-ovn-node-metrics-cert\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.012917 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-run-netns\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.012994 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-log-socket\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013059 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-openvswitch\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013108 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013136 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013140 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-system-cni-dir\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013191 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-slash\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013209 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013238 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-env-overrides\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013298 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-socket-dir-parent\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013362 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-etc-kubernetes\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013406 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-cnibin\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013455 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-script-lib\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013501 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-run-k8s-cni-cncf-io\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013545 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-var-lib-cni-bin\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013555 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-system-cni-dir\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013596 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-run-multus-certs\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013643 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-cni-dir\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013693 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-netd\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013737 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-os-release\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013789 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-system-cni-dir\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013795 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-var-lib-openvswitch\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013849 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-os-release\") pod \"multus-5xm6s\" (UID: 
\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013847 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-ovn\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013902 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-cnibin\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013915 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-ovn\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013149 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013998 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.014027 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.014454 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-cni-binary-copy\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.014496 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-hostroot\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.014523 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-systemd\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.014546 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-bin\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.014576 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-cnibin\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.014605 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-run-netns\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.014676 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-slash\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.014927 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-config\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.015005 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-log-socket\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.015107 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-openvswitch\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.012879 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-kubelet\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.015191 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-run-multus-certs\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.015456 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-socket-dir-parent\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.015503 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-etc-kubernetes\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.015580 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-cnibin\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.015827 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-env-overrides\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.016184 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-multus-cni-dir\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.016270 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-netd\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.016410 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-script-lib\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.016481 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-run-k8s-cni-cncf-io\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.016535 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-var-lib-openvswitch\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.013745 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/275f42a1-754e-4b35-8960-050748cbf355-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.016563 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-host-var-lib-cni-bin\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.016979 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/275f42a1-754e-4b35-8960-050748cbf355-os-release\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.019144 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29053aeb-7913-4b4d-94dd-503af8e1415f-ovn-node-metrics-cert\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.023235 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.028397 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz4cq\" (UniqueName: \"kubernetes.io/projected/9de3ae24-6a68-4d42-bb86-f3d22a6b651a-kube-api-access-jz4cq\") pod \"multus-5xm6s\" (UID: \"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\") " pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.031059 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8cxg\" (UniqueName: \"kubernetes.io/projected/275f42a1-754e-4b35-8960-050748cbf355-kube-api-access-m8cxg\") pod \"multus-additional-cni-plugins-f8vwm\" (UID: \"275f42a1-754e-4b35-8960-050748cbf355\") " pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.031955 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvfww\" (UniqueName: \"kubernetes.io/projected/29053aeb-7913-4b4d-94dd-503af8e1415f-kube-api-access-lvfww\") pod \"ovnkube-node-dvdfz\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.040345 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.058340 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.070590 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.082584 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.094161 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ 
'[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.108199 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.116515 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.116566 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.116577 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.116598 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.116612 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.121782 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.136883 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.178356 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:23 crc kubenswrapper[4756]: W0224 00:07:23.197338 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29053aeb_7913_4b4d_94dd_503af8e1415f.slice/crio-4691a61cd508cb5a3eb752efbf7555f5e60213b560da783a0f0faf6fe06b16ce WatchSource:0}: Error finding container 4691a61cd508cb5a3eb752efbf7555f5e60213b560da783a0f0faf6fe06b16ce: Status 404 returned error can't find the container with id 4691a61cd508cb5a3eb752efbf7555f5e60213b560da783a0f0faf6fe06b16ce Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.203590 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5xm6s" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.210824 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.222982 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.223017 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.223028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.223049 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.223082 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4756]: W0224 00:07:23.242987 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod275f42a1_754e_4b35_8960_050748cbf355.slice/crio-d315072a3bd1b848e3ddb8394f1e7f99d77ff6b15307b237941e3b9c0ae120ae WatchSource:0}: Error finding container d315072a3bd1b848e3ddb8394f1e7f99d77ff6b15307b237941e3b9c0ae120ae: Status 404 returned error can't find the container with id d315072a3bd1b848e3ddb8394f1e7f99d77ff6b15307b237941e3b9c0ae120ae Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.246990 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-64mrj" event={"ID":"b884f6c3-73b0-42a3-b301-c56e0043cd70","Type":"ContainerStarted","Data":"f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.247094 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-64mrj" event={"ID":"b884f6c3-73b0-42a3-b301-c56e0043cd70","Type":"ContainerStarted","Data":"61cfee81b2fd004b45c738548d4b24f8c77845609877521a4602533f9107a554"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.253827 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerStarted","Data":"352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.253871 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerStarted","Data":"1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.253885 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerStarted","Data":"9bb14ea1c64075b29b090516a022161e33cf971ed3dfe880bfe6e592f15fe2ca"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.255241 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5xm6s" event={"ID":"9de3ae24-6a68-4d42-bb86-f3d22a6b651a","Type":"ContainerStarted","Data":"2d2bd388a47695f14ff7bf8c081b33f627037de01b69353b5ca11d944ed9d9ac"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.256785 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerStarted","Data":"4691a61cd508cb5a3eb752efbf7555f5e60213b560da783a0f0faf6fe06b16ce"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.263042 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.285107 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.328914 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.328999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.329179 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.329246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.329284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.329310 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.343319 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.357629 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.369547 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.383705 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.401612 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.419276 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.432200 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.432258 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.432270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.432295 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.432333 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.435436 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.449718 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.471743 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.486447 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.501944 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.516826 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.533181 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.535284 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.535338 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.535351 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.535372 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.535385 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.548499 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.562492 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.577863 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.593147 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.605692 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 
00:07:23.616663 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.634905 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.638500 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.638551 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.638568 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.638589 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.638602 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.649474 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.670427 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.692027 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.706366 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.723870 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.738239 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641
900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.740953 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.740993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.741006 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.741027 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc 
kubenswrapper[4756]: I0224 00:07:23.741040 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.776350 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.817290 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.842098 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:44:51.283820289 +0000 UTC Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.844615 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 
00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.844659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.844668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.844685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.844696 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.853221 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.889758 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.931647 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.946512 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.946545 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:23 crc 
kubenswrapper[4756]: I0224 00:07:23.946554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.946573 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.946583 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:23Z","lastTransitionTime":"2026-02-24T00:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:23 crc kubenswrapper[4756]: I0224 00:07:23.973788 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:23Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.014452 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.048639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.048669 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.048677 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.048693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.048703 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.060942 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.092385 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 
3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.132837 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.152078 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.152115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.152129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.152149 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.152161 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.175664 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.219963 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.250910 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac3
9aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.255278 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.255315 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.255327 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.255349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 
00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.255361 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.261697 4756 generic.go:334] "Generic (PLEG): container finished" podID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerID="5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb" exitCode=0 Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.261804 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.264804 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5xm6s" event={"ID":"9de3ae24-6a68-4d42-bb86-f3d22a6b651a","Type":"ContainerStarted","Data":"45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.267017 4756 generic.go:334] "Generic (PLEG): container finished" podID="275f42a1-754e-4b35-8960-050748cbf355" containerID="aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944" exitCode=0 Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.267090 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" event={"ID":"275f42a1-754e-4b35-8960-050748cbf355","Type":"ContainerDied","Data":"aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944"} Feb 24 00:07:24 crc kubenswrapper[4756]: 
I0224 00:07:24.267129 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" event={"ID":"275f42a1-754e-4b35-8960-050748cbf355","Type":"ContainerStarted","Data":"d315072a3bd1b848e3ddb8394f1e7f99d77ff6b15307b237941e3b9c0ae120ae"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.297697 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.333281 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.358044 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.358126 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.358144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.358167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.358183 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.380238 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.413713 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\"
,\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.490983 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.493814 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.493858 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.493870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc 
kubenswrapper[4756]: I0224 00:07:24.493889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.493901 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.507665 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 
00:07:24.533251 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.574799 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.596657 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.596704 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.596714 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.596734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.596745 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.619028 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.652918 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T0
0:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.693623 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.699464 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.699499 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.699513 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.699533 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.699548 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.736828 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.737022 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.737150 4756 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.737208 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:08:28.737191755 +0000 UTC m=+165.648054398 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.737417 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:28.737408991 +0000 UTC m=+165.648271624 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.748273 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c
1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.779141 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.801949 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.802001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.802012 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.802029 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.802042 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.816851 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40
b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.833024 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.833180 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.833339 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.833358 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.833484 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.833591 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.837464 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.837514 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.837551 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.837635 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.837655 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.837667 4756 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.837686 4756 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.837706 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-24 00:08:28.83769158 +0000 UTC m=+165.748554213 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.837728 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-24 00:08:28.837719141 +0000 UTC m=+165.748581774 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.837811 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.837825 4756 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.837851 4756 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:24 crc kubenswrapper[4756]: E0224 00:07:24.837883 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-24 00:08:28.837870485 +0000 UTC m=+165.748733118 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.842330 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:18:22.27112713 +0000 UTC Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.854486 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operato
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.896011 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\"
,\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.904153 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.904205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.904218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.904239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.904251 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:24Z","lastTransitionTime":"2026-02-24T00:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.934409 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:24 crc kubenswrapper[4756]: I0224 00:07:24.979877 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:24Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.007426 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.007470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.007494 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.007510 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.007521 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.012675 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.055581 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.090720 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.110513 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.110610 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.110644 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.110678 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.110702 4756 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.154616 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc0418
9788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6
7314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.214142 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.214210 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.214221 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.214245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.214259 4756 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.275391 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerStarted","Data":"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.275466 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerStarted","Data":"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.275487 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerStarted","Data":"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.275510 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerStarted","Data":"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.275530 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" 
event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerStarted","Data":"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.275551 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerStarted","Data":"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.277813 4756 generic.go:334] "Generic (PLEG): container finished" podID="275f42a1-754e-4b35-8960-050748cbf355" containerID="a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559" exitCode=0 Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.277898 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" event={"ID":"275f42a1-754e-4b35-8960-050748cbf355","Type":"ContainerDied","Data":"a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.320784 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.322854 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.322919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.322936 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.322961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.322980 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.343895 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29
a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.365693 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.387127 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.410224 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.426947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.427003 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.427025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.427052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.427112 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.430683 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.449225 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.464531 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.503623 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.529474 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.529516 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.529525 4756 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.529544 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.529556 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.538419 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" 
Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.576697 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.621715 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b81
9eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.634514 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.634565 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.634578 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.634600 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.634616 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.656637 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.714466 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.737581 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.737630 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.737643 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.737665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.737679 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.738995 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.771324 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-r75b8"] Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.771828 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-r75b8" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.776701 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.783545 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.804490 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.823978 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"kube-root-ca.crt" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.840175 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.840241 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.840255 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.840278 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.840290 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.842522 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.842522 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:55:39.321300686 +0000 UTC Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.848848 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c0e52828-39b2-4e41-87a0-9c1b983b5f56-serviceca\") pod \"node-ca-r75b8\" (UID: \"c0e52828-39b2-4e41-87a0-9c1b983b5f56\") " pod="openshift-image-registry/node-ca-r75b8" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.848906 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0e52828-39b2-4e41-87a0-9c1b983b5f56-host\") pod \"node-ca-r75b8\" (UID: \"c0e52828-39b2-4e41-87a0-9c1b983b5f56\") " pod="openshift-image-registry/node-ca-r75b8" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.848945 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsw4b\" (UniqueName: \"kubernetes.io/projected/c0e52828-39b2-4e41-87a0-9c1b983b5f56-kube-api-access-dsw4b\") pod \"node-ca-r75b8\" (UID: \"c0e52828-39b2-4e41-87a0-9c1b983b5f56\") " pod="openshift-image-registry/node-ca-r75b8" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.893994 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.932326 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.944190 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.944233 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.944247 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 
00:07:25.944272 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.944287 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:25Z","lastTransitionTime":"2026-02-24T00:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.950514 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsw4b\" (UniqueName: \"kubernetes.io/projected/c0e52828-39b2-4e41-87a0-9c1b983b5f56-kube-api-access-dsw4b\") pod \"node-ca-r75b8\" (UID: \"c0e52828-39b2-4e41-87a0-9c1b983b5f56\") " pod="openshift-image-registry/node-ca-r75b8" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.950571 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c0e52828-39b2-4e41-87a0-9c1b983b5f56-serviceca\") pod \"node-ca-r75b8\" (UID: \"c0e52828-39b2-4e41-87a0-9c1b983b5f56\") " pod="openshift-image-registry/node-ca-r75b8" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.950617 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0e52828-39b2-4e41-87a0-9c1b983b5f56-host\") pod \"node-ca-r75b8\" (UID: \"c0e52828-39b2-4e41-87a0-9c1b983b5f56\") " pod="openshift-image-registry/node-ca-r75b8" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.950697 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0e52828-39b2-4e41-87a0-9c1b983b5f56-host\") pod \"node-ca-r75b8\" (UID: 
\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\") " pod="openshift-image-registry/node-ca-r75b8" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.951678 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c0e52828-39b2-4e41-87a0-9c1b983b5f56-serviceca\") pod \"node-ca-r75b8\" (UID: \"c0e52828-39b2-4e41-87a0-9c1b983b5f56\") " pod="openshift-image-registry/node-ca-r75b8" Feb 24 00:07:25 crc kubenswrapper[4756]: I0224 00:07:25.972152 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:25Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.001258 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsw4b\" (UniqueName: \"kubernetes.io/projected/c0e52828-39b2-4e41-87a0-9c1b983b5f56-kube-api-access-dsw4b\") pod \"node-ca-r75b8\" (UID: \"c0e52828-39b2-4e41-87a0-9c1b983b5f56\") " pod="openshift-image-registry/node-ca-r75b8" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.032245 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.047870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.047921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.047937 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.047961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.047975 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.077108 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.083575 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-r75b8" Feb 24 00:07:26 crc kubenswrapper[4756]: W0224 00:07:26.096382 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0e52828_39b2_4e41_87a0_9c1b983b5f56.slice/crio-0c256eb1cc9fa70bd1a91b77abd5c2e115778cc4dca5e56e57f30c765b419792 WatchSource:0}: Error finding container 0c256eb1cc9fa70bd1a91b77abd5c2e115778cc4dca5e56e57f30c765b419792: Status 404 returned error can't find the container with id 0c256eb1cc9fa70bd1a91b77abd5c2e115778cc4dca5e56e57f30c765b419792 Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.111465 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.150643 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.150682 4756 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.150695 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.150715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.150912 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.151499 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.190564 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.234577 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.253620 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.253688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.253701 4756 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.253725 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.253739 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.270012 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.289411 4756 generic.go:334] "Generic (PLEG): container finished" podID="275f42a1-754e-4b35-8960-050748cbf355" containerID="865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b" exitCode=0 Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.289472 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" 
event={"ID":"275f42a1-754e-4b35-8960-050748cbf355","Type":"ContainerDied","Data":"865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.291380 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r75b8" event={"ID":"c0e52828-39b2-4e41-87a0-9c1b983b5f56","Type":"ContainerStarted","Data":"0c256eb1cc9fa70bd1a91b77abd5c2e115778cc4dca5e56e57f30c765b419792"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.313436 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rba
c-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.358780 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.358829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.358840 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.358856 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.358867 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.361145 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.395228 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\"
,\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.431921 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.461097 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.461153 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.461168 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc 
kubenswrapper[4756]: I0224 00:07:26.461189 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.461205 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.471545 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 
00:07:26.509307 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.550940 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.564466 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.564625 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.564871 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.565115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.565340 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.589139 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.646226 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.667967 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.668312 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.668421 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.668504 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.668587 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.673983 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.685984 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.686011 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.686021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.686039 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.686050 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: E0224 00:07:26.700786 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.706790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.706917 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.707012 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.707178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.707268 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.710321 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: E0224 00:07:26.719482 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.724659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.724708 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.724720 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.724739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.724756 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: E0224 00:07:26.742100 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.746187 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.746218 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.746229 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.746246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.746256 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.752039 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: E0224 00:07:26.756952 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.760852 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.760882 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.760890 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.760908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.760920 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: E0224 00:07:26.771698 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: E0224 00:07:26.772033 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.773853 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.773965 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.774025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.774113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.774176 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.791003 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.832020 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.832327 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.832356 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.832375 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:26 crc kubenswrapper[4756]: E0224 00:07:26.832478 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:26 crc kubenswrapper[4756]: E0224 00:07:26.832545 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:26 crc kubenswrapper[4756]: E0224 00:07:26.832601 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.843208 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:18:19.447209135 +0000 UTC Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.871567 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.877139 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.877184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.877198 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.877224 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.877239 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.920114 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.959881 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.980137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.980182 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.980195 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.980213 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:26 crc kubenswrapper[4756]: I0224 00:07:26.980225 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:26Z","lastTransitionTime":"2026-02-24T00:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:26.999968 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:26Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.046573 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.076238 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.083322 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.083373 4756 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.083385 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.083406 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.083421 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.113603 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.156492 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.190921 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc 
kubenswrapper[4756]: I0224 00:07:27.190972 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.190983 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.191001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.191013 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.199161 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.233639 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.299505 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.299653 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.299681 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 
00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.299730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.299750 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.307622 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerStarted","Data":"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.308945 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r75b8" event={"ID":"c0e52828-39b2-4e41-87a0-9c1b983b5f56","Type":"ContainerStarted","Data":"8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.312721 4756 generic.go:334] "Generic (PLEG): container finished" podID="275f42a1-754e-4b35-8960-050748cbf355" containerID="70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c" exitCode=0 Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.312761 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" event={"ID":"275f42a1-754e-4b35-8960-050748cbf355","Type":"ContainerDied","Data":"70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.327224 4756 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.348847 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.362004 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.394982 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 
00:07:27.406806 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.406841 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.406850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.406864 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.406874 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.434780 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.474770 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad
18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.513442 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.513476 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.513486 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 
00:07:27.513502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.513512 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.520932 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.555860 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.598378 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.633207 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.633773 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.633784 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.633800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.633811 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.643708 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\
\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276
e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.675317 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.713544 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.736834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.736870 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.736880 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.736898 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.736909 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.754495 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.801378 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.838617 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.840635 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.840698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.840709 4756 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.840727 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.840743 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.844380 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:50:33.185577042 +0000 UTC Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.877269 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.931856 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.946338 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.946405 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.946423 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.946449 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.946469 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:27Z","lastTransitionTime":"2026-02-24T00:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:27 crc kubenswrapper[4756]: I0224 00:07:27.953709 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:27Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.003727 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.035951 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.049998 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc 
kubenswrapper[4756]: I0224 00:07:28.050056 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.050099 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.050134 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.050166 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.079354 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.114013 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.153580 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.153642 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.153656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.153680 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.153695 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.155038 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.213412 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.254957 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.257038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.257140 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.257168 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.257199 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.257224 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.277598 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.315880 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.332279 4756 generic.go:334] "Generic (PLEG): container finished" podID="275f42a1-754e-4b35-8960-050748cbf355" containerID="cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4" exitCode=0 Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.332348 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" event={"ID":"275f42a1-754e-4b35-8960-050748cbf355","Type":"ContainerDied","Data":"cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4"} Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.355650 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.361713 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.361751 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.361763 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.361782 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.361795 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.397366 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z 
is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.432171 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a995
6edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.464934 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.464999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.465021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.465053 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.465100 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.472852 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.513227 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.553572 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.567889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.567943 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.567955 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.567979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.567994 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.597462 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.633311 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bde
af3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.671937 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.671999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.672018 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.672050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.672104 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.685304 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.719170 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\"
,\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.753439 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.774999 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.775109 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.775133 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc 
kubenswrapper[4756]: I0224 00:07:28.775162 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.775181 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.800595 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 
00:07:28.832271 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:28 crc kubenswrapper[4756]: E0224 00:07:28.832458 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.832240 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.832844 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:28 crc kubenswrapper[4756]: E0224 00:07:28.833164 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:28 crc kubenswrapper[4756]: E0224 00:07:28.833254 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.834408 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.845225 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 17:33:28.993027573 +0000 UTC Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.875129 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.877937 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc 
kubenswrapper[4756]: I0224 00:07:28.878002 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.878023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.878054 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.878093 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.914776 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.963177 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.982314 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.982356 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.982367 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 
00:07:28.982386 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.982399 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:28Z","lastTransitionTime":"2026-02-24T00:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:28 crc kubenswrapper[4756]: I0224 00:07:28.994770 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:28Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.035651 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.081918 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.084987 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.085035 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.085045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.085090 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.085105 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.116665 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.166874 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.188329 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.188820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.188834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.188857 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.188870 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.199503 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.237989 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.276841 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.292222 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.292291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.292305 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.292330 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.292346 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.350398 4756 generic.go:334] "Generic (PLEG): container finished" podID="275f42a1-754e-4b35-8960-050748cbf355" containerID="46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7" exitCode=0 Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.350507 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" event={"ID":"275f42a1-754e-4b35-8960-050748cbf355","Type":"ContainerDied","Data":"46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7"} Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.401918 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.401979 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.401995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.402016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.402031 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.404265 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.421416 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\"
,\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.443050 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.459455 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.473319 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.505348 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.505396 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.505408 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.505427 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.505440 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.519050 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z 
is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.551528 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.595460 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.608619 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.608665 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.608682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.608708 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.608724 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.635760 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.676488 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.711595 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.711655 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.711666 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.711685 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.711696 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.730057 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.753898 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T0
0:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.794749 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.814912 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.814972 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.814991 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.815015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.815034 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.833411 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.846358 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:28:46.965961872 +0000 UTC Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 
00:07:29.876184 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.910432 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.917923 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.917978 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.917991 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.918016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.918031 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:29Z","lastTransitionTime":"2026-02-24T00:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:29 crc kubenswrapper[4756]: I0224 00:07:29.952762 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:29Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.020260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.020311 4756 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.020334 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.020356 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.020369 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.123397 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.124674 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.124698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.124725 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.124743 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.227804 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.227887 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.227908 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.227940 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.227962 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.332142 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.332214 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.332237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.332269 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.332289 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.377645 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerStarted","Data":"a786c626133dbf308a3a0b597735d627ff5fbcea2f9c8ae7804af443157f175b"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.378304 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.378400 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.378415 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.386700 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" event={"ID":"275f42a1-754e-4b35-8960-050748cbf355","Type":"ContainerStarted","Data":"23d2d3ea83264b1b8b25717e2e9b5ad6ac4895137a79418bb22752834989b0fe"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.403567 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.418849 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.428967 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.435608 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.435641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.435654 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.435672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.435688 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.437836 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a786c626133dbf308a3a0b597735d627ff5fbcea2f9c8ae7804af443157f175b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.458097 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.476101 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.491229 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.517471 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.536600 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.538827 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.538875 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.538884 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.538903 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.538913 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.554972 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.576120 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.594719 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.615443 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.636410 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.641820 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.641860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.641872 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.641893 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.641906 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.656462 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.674005 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.685422 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.712927 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.733359 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.744995 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.745028 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.745246 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.745288 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.745302 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.750787 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40
b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.770615 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.786057 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.807627 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d2d3ea83264b1b8b25717e2e9b5ad6ac4895137a79418bb22752834989b0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082
613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.832573 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.832616 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.832716 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.832769 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:30 crc kubenswrapper[4756]: E0224 00:07:30.833011 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:30 crc kubenswrapper[4756]: E0224 00:07:30.833211 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:30 crc kubenswrapper[4756]: E0224 00:07:30.833319 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.846943 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 22:06:01.98221117 +0000 UTC Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.848862 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.848890 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.848899 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.848915 4756 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.848928 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.871957 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426
f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 
crc kubenswrapper[4756]: I0224 00:07:30.925768 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\
\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928
e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\
"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.951964 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.952004 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.952016 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.952034 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.952047 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:30Z","lastTransitionTime":"2026-02-24T00:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.958021 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:30 crc kubenswrapper[4756]: I0224 00:07:30.999617 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:30Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.037840 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.055013 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.055129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.055151 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.055188 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.055226 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.073720 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.115621 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.161455 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc 
kubenswrapper[4756]: I0224 00:07:31.161521 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.161543 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.161572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.161591 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.162183 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.195305 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.236033 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.264017 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.264114 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.264144 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.264178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.264201 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.277957 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.328941 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a786c626133dbf308a3a0b597735d627ff5fbcea2f9c8ae7804af443157f175b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:31Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.366943 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.367011 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.367024 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.367043 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.367056 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.470250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.470282 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.470292 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.470309 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.470319 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.573509 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.573571 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.573582 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.573605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.573620 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.678151 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.678210 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.678222 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.678242 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.678253 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.781242 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.781600 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.781672 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.781741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.781815 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.847434 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 15:43:56.917402088 +0000 UTC Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.885746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.885800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.885811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.885832 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.885851 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.989610 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.990039 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.990270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.990439 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:31 crc kubenswrapper[4756]: I0224 00:07:31.990575 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:31Z","lastTransitionTime":"2026-02-24T00:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.093550 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.093624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.093650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.093683 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.093701 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.196863 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.196937 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.196955 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.196983 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.197002 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.300861 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.300933 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.300951 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.300981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.300999 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.402509 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dvdfz_29053aeb-7913-4b4d-94dd-503af8e1415f/ovnkube-controller/0.log" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.403436 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.403539 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.403560 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.403590 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.403612 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.406861 4756 generic.go:334] "Generic (PLEG): container finished" podID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerID="a786c626133dbf308a3a0b597735d627ff5fbcea2f9c8ae7804af443157f175b" exitCode=1 Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.406953 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"a786c626133dbf308a3a0b597735d627ff5fbcea2f9c8ae7804af443157f175b"} Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.408140 4756 scope.go:117] "RemoveContainer" containerID="a786c626133dbf308a3a0b597735d627ff5fbcea2f9c8ae7804af443157f175b" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.438317 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.461061 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.480762 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.501305 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d2d3ea83264b1b8b25717e2e9b5ad6ac4895137a79418bb22752834989b0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082
613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.507042 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.507128 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.507148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.507178 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.507202 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.521428 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.540906 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad
18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.563609 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.584295 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.610216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.610271 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.610286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.610308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.610319 4756 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.624987 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc0418
9788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6
7314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.650488 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\"
,\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.667145 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.682905 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.696809 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.711740 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.712859 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.712898 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.712911 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 
00:07:32.712932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.712947 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.730996 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.746093 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.772343 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a786c626133dbf308a3a0b597735d627ff5fbcea2f9c8ae7804af443157f175b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a786c626133dbf308a3a0b597735d627ff5fbcea2f9c8ae7804af443157f175b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:32.298529 6476 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 00:07:32.298564 6476 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0224 00:07:32.298587 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 00:07:32.298620 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 00:07:32.298632 6476 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:32.298674 6476 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:32.298684 6476 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0224 00:07:32.298687 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:32.298719 6476 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:32.298785 6476 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:32.298766 6476 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 00:07:32.298833 6476 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:32.298862 6476 factory.go:656] Stopping watch factory\\\\nI0224 00:07:32.298885 6476 ovnkube.go:599] Stopped ovnkube\\\\nI0224 00:07:32.298925 6476 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:32Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.816009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.816050 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.816075 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.816094 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.816108 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.832482 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.832540 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.832592 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:32 crc kubenswrapper[4756]: E0224 00:07:32.832664 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:32 crc kubenswrapper[4756]: E0224 00:07:32.832740 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:32 crc kubenswrapper[4756]: E0224 00:07:32.832996 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.849093 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:48:03.831350748 +0000 UTC Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.919417 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.919479 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.919491 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.919512 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:32 crc kubenswrapper[4756]: I0224 00:07:32.919525 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:32Z","lastTransitionTime":"2026-02-24T00:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.022009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.022045 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.022056 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.022090 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.022102 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.125167 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.125216 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.125227 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.125245 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.125255 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.227736 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.228289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.228303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.228323 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.228334 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.331588 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.331664 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.331686 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.331715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.331733 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.412305 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dvdfz_29053aeb-7913-4b4d-94dd-503af8e1415f/ovnkube-controller/1.log" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.412918 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dvdfz_29053aeb-7913-4b4d-94dd-503af8e1415f/ovnkube-controller/0.log" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.416360 4756 generic.go:334] "Generic (PLEG): container finished" podID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerID="7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae" exitCode=1 Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.416417 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.416479 4756 scope.go:117] "RemoveContainer" containerID="a786c626133dbf308a3a0b597735d627ff5fbcea2f9c8ae7804af443157f175b" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.417355 4756 scope.go:117] "RemoveContainer" containerID="7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae" Feb 24 00:07:33 crc kubenswrapper[4756]: E0224 00:07:33.417645 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dvdfz_openshift-ovn-kubernetes(29053aeb-7913-4b4d-94dd-503af8e1415f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.433269 4756 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.435188 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.435281 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.435308 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.435344 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.435369 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.457367 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.477016 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\"
,\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.492229 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.504545 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.516353 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.532262 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.538640 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.538679 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.538691 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.538712 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.538725 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.546772 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29
a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.562947 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.578594 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.591408 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.613005 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a786c626133dbf308a3a0b597735d627ff5fbcea2f9c8ae7804af443157f175b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:32.298529 6476 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 00:07:32.298564 6476 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 00:07:32.298587 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 
00:07:32.298620 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 00:07:32.298632 6476 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:32.298674 6476 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:32.298684 6476 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0224 00:07:32.298687 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:32.298719 6476 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:32.298785 6476 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:32.298766 6476 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 00:07:32.298833 6476 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:32.298862 6476 factory.go:656] Stopping watch factory\\\\nI0224 00:07:32.298885 6476 ovnkube.go:599] Stopped ovnkube\\\\nI0224 00:07:32.298925 6476 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"message\\\":\\\" for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0224 00:07:33.323887 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h\\\\nI0224 00:07:33.323900 6606 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-qb88h in node crc\\\\nI0224 00:07:33.323904 6606 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port 
openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0224 00:07:33.323914 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h after 0 failed attempt(s)\\\\nF0224 00:07:33.323925 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-iden\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.627797 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.641054 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.641110 4756 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.641121 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.641139 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.641152 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.648022 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.663826 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.682964 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d2d3ea83264b1b8b25717e2e9b5ad6ac4895137a79418bb22752834989b0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082
613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.696447 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.743956 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.744023 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.744044 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.744122 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.744139 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.846837 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.846929 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.846948 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.846976 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.846993 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.850110 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:08:01.088425281 +0000 UTC Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.851676 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81
123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.871525 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.892580 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.909488 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.937749 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a786c626133dbf308a3a0b597735d627ff5fbcea2f9c8ae7804af443157f175b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:32Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0224 00:07:32.298529 6476 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0224 00:07:32.298564 6476 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0224 00:07:32.298587 6476 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0224 
00:07:32.298620 6476 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0224 00:07:32.298632 6476 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0224 00:07:32.298674 6476 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0224 00:07:32.298684 6476 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0224 00:07:32.298687 6476 handler.go:208] Removed *v1.Node event handler 2\\\\nI0224 00:07:32.298719 6476 handler.go:208] Removed *v1.Node event handler 7\\\\nI0224 00:07:32.298785 6476 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0224 00:07:32.298766 6476 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0224 00:07:32.298833 6476 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0224 00:07:32.298862 6476 factory.go:656] Stopping watch factory\\\\nI0224 00:07:32.298885 6476 ovnkube.go:599] Stopped ovnkube\\\\nI0224 00:07:32.298925 6476 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0224 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"message\\\":\\\" for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0224 00:07:33.323887 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h\\\\nI0224 00:07:33.323900 6606 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-qb88h in node crc\\\\nI0224 00:07:33.323904 6606 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port 
openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0224 00:07:33.323914 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h after 0 failed attempt(s)\\\\nF0224 00:07:33.323925 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-iden\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.956641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.956711 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.956729 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.956755 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.956774 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:33Z","lastTransitionTime":"2026-02-24T00:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.962454 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:33 crc kubenswrapper[4756]: I0224 00:07:33.983670 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.002382 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:33Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.021692 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d2d3ea83264b1b8b25717e2e9b5ad6ac4895137a79418bb22752834989b0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082
613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.039880 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.059790 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.059854 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.059880 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.059786 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.059913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.059940 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.077907 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z 
is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.092379 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.119639 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.134408 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.152010 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.162822 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.162863 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.162874 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.162892 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.162903 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.168628 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.265824 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.265900 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.265919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.265980 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.266005 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.368741 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.368804 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.368817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.368842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.368856 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.423239 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dvdfz_29053aeb-7913-4b4d-94dd-503af8e1415f/ovnkube-controller/1.log" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.428563 4756 scope.go:117] "RemoveContainer" containerID="7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae" Feb 24 00:07:34 crc kubenswrapper[4756]: E0224 00:07:34.428769 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dvdfz_openshift-ovn-kubernetes(29053aeb-7913-4b4d-94dd-503af8e1415f)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.442911 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.473675 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.473726 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.473739 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.473761 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.473778 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.474561 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"message\\\":\\\" for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0224 00:07:33.323887 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h\\\\nI0224 00:07:33.323900 6606 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-qb88h 
in node crc\\\\nI0224 00:07:33.323904 6606 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0224 00:07:33.323914 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h after 0 failed attempt(s)\\\\nF0224 00:07:33.323925 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-iden\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dvdfz_openshift-ovn-kubernetes(29053aeb-7913-4b4d-94dd-503af8e1415f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb
2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.494760 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.512930 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.532704 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.552866 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d2d3ea83264b1b8b25717e2e9b5ad6ac4895137a79418bb22752834989b0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082
613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.571030 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.576799 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.576876 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.576895 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.576925 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.576947 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.590899 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss 
-Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.622293 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.636675 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.656087 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.676277 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.679675 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.679760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.679817 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.679850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.679872 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.688055 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.701783 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.714297 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.782464 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.782527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.782539 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.782566 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.782580 4756 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.794966 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc0418
9788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6
7314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.816249 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\"
,\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:34Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.832601 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:34 crc kubenswrapper[4756]: E0224 00:07:34.832811 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.833229 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:34 crc kubenswrapper[4756]: E0224 00:07:34.833386 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.833635 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:34 crc kubenswrapper[4756]: E0224 00:07:34.833739 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.850687 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 23:57:07.611999596 +0000 UTC Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.886402 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.886463 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.886478 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.886504 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.886521 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.994968 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.995033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.995047 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.995101 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:34 crc kubenswrapper[4756]: I0224 00:07:34.995116 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:34Z","lastTransitionTime":"2026-02-24T00:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.098674 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.098742 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.098759 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.098785 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.098803 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.202138 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.202205 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.202222 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.202250 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.202272 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.253985 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l"] Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.254557 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.258027 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.258489 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.276920 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.295745 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c
1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.305525 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.305580 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.305601 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc 
kubenswrapper[4756]: I0224 00:07:35.305633 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.305654 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.320726 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d2d3ea83264b1b8b25717e2e9b5ad6ac4895137a79418bb22752834989b0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46
cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.338389 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.359389 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-24T00:06:12Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.367126 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/4b443814-16b7-4597-9f5d-a42b27c8dd6e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.367261 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b443814-16b7-4597-9f5d-a42b27c8dd6e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.367305 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b443814-16b7-4597-9f5d-a42b27c8dd6e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.367348 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg22p\" (UniqueName: \"kubernetes.io/projected/4b443814-16b7-4597-9f5d-a42b27c8dd6e-kube-api-access-fg22p\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.393725 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.408490 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.408547 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.408563 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.408589 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.408608 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.413984 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.431972 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.447855 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.461104 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.467977 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b443814-16b7-4597-9f5d-a42b27c8dd6e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.468030 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b443814-16b7-4597-9f5d-a42b27c8dd6e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.468056 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg22p\" (UniqueName: \"kubernetes.io/projected/4b443814-16b7-4597-9f5d-a42b27c8dd6e-kube-api-access-fg22p\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.468108 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/4b443814-16b7-4597-9f5d-a42b27c8dd6e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.469292 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4b443814-16b7-4597-9f5d-a42b27c8dd6e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.469546 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4b443814-16b7-4597-9f5d-a42b27c8dd6e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.476205 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4b443814-16b7-4597-9f5d-a42b27c8dd6e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.484243 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.490614 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg22p\" (UniqueName: 
\"kubernetes.io/projected/4b443814-16b7-4597-9f5d-a42b27c8dd6e-kube-api-access-fg22p\") pod \"ovnkube-control-plane-749d76644c-xpc4l\" (UID: \"4b443814-16b7-4597-9f5d-a42b27c8dd6e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.495160 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.505414 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b443814-16b7-4597-9f5d-a42b27c8dd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg22p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg22p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xpc4l\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.511614 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.511641 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.511650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.511668 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.511681 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.525327 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.559438 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.593326 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.595504 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.614221 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.614257 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.614270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.614289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.614304 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4756]: W0224 00:07:35.616698 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b443814_16b7_4597_9f5d_a42b27c8dd6e.slice/crio-4db3d05452c07e0edcad966d8c9f696ce51c9c752e091c7e6f28d98e5d485a48 WatchSource:0}: Error finding container 4db3d05452c07e0edcad966d8c9f696ce51c9c752e091c7e6f28d98e5d485a48: Status 404 returned error can't find the container with id 4db3d05452c07e0edcad966d8c9f696ce51c9c752e091c7e6f28d98e5d485a48 Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.619716 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"message\\\":\\\" for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0224 00:07:33.323887 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h\\\\nI0224 00:07:33.323900 6606 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-qb88h 
in node crc\\\\nI0224 00:07:33.323904 6606 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0224 00:07:33.323914 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h after 0 failed attempt(s)\\\\nF0224 00:07:33.323925 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-iden\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dvdfz_openshift-ovn-kubernetes(29053aeb-7913-4b4d-94dd-503af8e1415f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb
2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.635713 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:35Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.717431 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.717495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.717508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.717527 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.717540 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.821281 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.821325 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.821340 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.821364 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.821379 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.851642 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 05:04:00.695138159 +0000 UTC Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.924133 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.924187 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.924201 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.924225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:35 crc kubenswrapper[4756]: I0224 00:07:35.924239 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:35Z","lastTransitionTime":"2026-02-24T00:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.027203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.027262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.027277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.027302 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.027320 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.129946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.129981 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.129990 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.130007 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.130019 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.232740 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.232784 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.232793 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.232811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.232821 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.336254 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.336290 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.336301 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.336329 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.336344 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.437586 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" event={"ID":"4b443814-16b7-4597-9f5d-a42b27c8dd6e","Type":"ContainerStarted","Data":"aaaaa5665d28e819feee79f132b7cb1776dcb1fcbfffdf3d510544c61a4565a7"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.437646 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" event={"ID":"4b443814-16b7-4597-9f5d-a42b27c8dd6e","Type":"ContainerStarted","Data":"7143162dddaf42e2a66682d98b85c7a1798a222b7ca34f9b8d6556db754b64ee"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.437659 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" event={"ID":"4b443814-16b7-4597-9f5d-a42b27c8dd6e","Type":"ContainerStarted","Data":"4db3d05452c07e0edcad966d8c9f696ce51c9c752e091c7e6f28d98e5d485a48"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.438508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.438704 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.438850 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.439009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.439261 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.453578 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b443814-16b7-4597-9f5d-a42b27c8dd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7143162dddaf42e2a66682d98b85c7a1798a222b7ca34f9b8d6556db754b64ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg22p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaaaa5665d28e819feee79f132b7cb1776dcb1fcbfffdf3d510544c61a4565a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg22p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xpc4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.483541 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"message\\\":\\\" for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0224 00:07:33.323887 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h\\\\nI0224 00:07:33.323900 6606 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-qb88h 
in node crc\\\\nI0224 00:07:33.323904 6606 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0224 00:07:33.323914 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h after 0 failed attempt(s)\\\\nF0224 00:07:33.323925 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-iden\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dvdfz_openshift-ovn-kubernetes(29053aeb-7913-4b4d-94dd-503af8e1415f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb
2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.499562 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.513669 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.527560 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.542258 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.542290 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.542303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.542324 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.542338 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.546952 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.563029 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.582270 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.603833 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.621427 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.643280 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d2d3ea83264b1b8b25717e2e9b5ad6ac4895137a79418bb22752834989b0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082
613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.644867 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.644906 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.644922 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.644946 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.644962 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.662919 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.676178 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.696317 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.708251 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-
crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.736039 4756 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0
cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788ba
d5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-d
ir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.747495 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.747726 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.747809 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.747913 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.747989 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.751731 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.768827 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.809395 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.809433 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.809448 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc 
kubenswrapper[4756]: I0224 00:07:36.809473 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.809491 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: E0224 00:07:36.826644 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.832282 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.832306 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.832300 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.832400 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.832450 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.832317 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: E0224 00:07:36.832512 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.832529 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.832584 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: E0224 00:07:36.832616 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:36 crc kubenswrapper[4756]: E0224 00:07:36.832725 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:36 crc kubenswrapper[4756]: E0224 00:07:36.850577 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.852742 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:06:09.606513634 +0000 UTC Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.855597 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.855646 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.855659 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.855682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.855699 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: E0224 00:07:36.880755 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.885849 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.886060 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.886148 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.886219 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.886287 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: E0224 00:07:36.902654 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.906764 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.906812 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.906829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.906851 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.906868 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:36 crc kubenswrapper[4756]: E0224 00:07:36.921699 4756 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"75eca80e-f61e-4b17-a785-e3e58909daf6\\\",\\\"systemUUID\\\":\\\"47ce3691-1d61-4b4a-a860-22c7e0dded9b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:36Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:36 crc kubenswrapper[4756]: E0224 00:07:36.921929 4756 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.923696 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.923758 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.923778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.923810 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:36 crc kubenswrapper[4756]: I0224 00:07:36.923828 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:36Z","lastTransitionTime":"2026-02-24T00:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.026508 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.026927 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.027113 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.027303 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.027428 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:37Z","lastTransitionTime":"2026-02-24T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.132335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.132381 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.132395 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.132422 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.132439 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:37Z","lastTransitionTime":"2026-02-24T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.136731 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jlcw6"] Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.137616 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:37 crc kubenswrapper[4756]: E0224 00:07:37.137753 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jlcw6" podUID="1508259c-4154-46a3-a390-2200d44b9524" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.157060 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b443814-16b7-4597-9f5d-a42b27c8dd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7143162dddaf42e2a66682d98b85c7a1798a222b7ca34f9b8d6556db754b64ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg22p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaaaa5665d28e819feee79f132b7cb1776dcb1fcbfffdf3d510544c61a4565a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg22p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xpc4l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.171904 4756 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.188690 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.191953 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs\") pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.192144 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xmk5t\" (UniqueName: \"kubernetes.io/projected/1508259c-4154-46a3-a390-2200d44b9524-kube-api-access-xmk5t\") pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.204891 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.222220 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.236089 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.236277 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.236430 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.236555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.236668 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:37Z","lastTransitionTime":"2026-02-24T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.251540 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"message\\\":\\\" for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0224 00:07:33.323887 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h\\\\nI0224 00:07:33.323900 6606 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-qb88h 
in node crc\\\\nI0224 00:07:33.323904 6606 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0224 00:07:33.323914 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h after 0 failed attempt(s)\\\\nF0224 00:07:33.323925 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-iden\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dvdfz_openshift-ovn-kubernetes(29053aeb-7913-4b4d-94dd-503af8e1415f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb
2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.268398 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jlcw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1508259c-4154-46a3-a390-2200d44b9524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmk5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmk5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jlcw6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.287924 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\
\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.293595 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmk5t\" (UniqueName: \"kubernetes.io/projected/1508259c-4154-46a3-a390-2200d44b9524-kube-api-access-xmk5t\") 
pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.293683 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs\") pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:37 crc kubenswrapper[4756]: E0224 00:07:37.293929 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:37 crc kubenswrapper[4756]: E0224 00:07:37.294027 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs podName:1508259c-4154-46a3-a390-2200d44b9524 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:37.794001168 +0000 UTC m=+114.704863841 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs") pod "network-metrics-daemon-jlcw6" (UID: "1508259c-4154-46a3-a390-2200d44b9524") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.303193 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-
cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.319923 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42
745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.328493 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmk5t\" (UniqueName: \"kubernetes.io/projected/1508259c-4154-46a3-a390-2200d44b9524-kube-api-access-xmk5t\") pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.338708 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d2d3ea83264b1b8b25717e2e9b5ad6ac4895137a79418bb22752834989b0fe\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.339619 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.339733 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.339760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.339798 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.339822 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:37Z","lastTransitionTime":"2026-02-24T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.351435 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.362788 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.385039 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.402591 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\",\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.417597 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.432399 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.441715 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.441760 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.441776 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.441800 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.441821 4756 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:37Z","lastTransitionTime":"2026-02-24T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.446392 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.464355 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:37Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.546155 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:37 crc 
kubenswrapper[4756]: I0224 00:07:37.546235 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.546252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.546286 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.546302 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:37Z","lastTransitionTime":"2026-02-24T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.648889 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.648977 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.648990 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.649009 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.649025 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:37Z","lastTransitionTime":"2026-02-24T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.753163 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.753232 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.753252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.753279 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.753305 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:37Z","lastTransitionTime":"2026-02-24T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.802606 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs\") pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:37 crc kubenswrapper[4756]: E0224 00:07:37.803038 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:37 crc kubenswrapper[4756]: E0224 00:07:37.803253 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs podName:1508259c-4154-46a3-a390-2200d44b9524 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:38.803221682 +0000 UTC m=+115.714084355 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs") pod "network-metrics-daemon-jlcw6" (UID: "1508259c-4154-46a3-a390-2200d44b9524") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.853351 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:40:51.79584935 +0000 UTC Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.857217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.857275 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.857288 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.857311 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.857331 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:37Z","lastTransitionTime":"2026-02-24T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.961781 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.961860 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.961883 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.961918 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:37 crc kubenswrapper[4756]: I0224 00:07:37.961939 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:37Z","lastTransitionTime":"2026-02-24T00:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.067471 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.067555 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.067575 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.067606 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.067627 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:38Z","lastTransitionTime":"2026-02-24T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.171856 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.171945 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.171964 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.171996 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.172020 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:38Z","lastTransitionTime":"2026-02-24T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.283394 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.283574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.283645 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.283682 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.283748 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:38Z","lastTransitionTime":"2026-02-24T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.388184 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.388262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.388278 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.388307 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.388327 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:38Z","lastTransitionTime":"2026-02-24T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.491959 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.492021 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.492039 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.492092 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.492111 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:38Z","lastTransitionTime":"2026-02-24T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.595513 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.595567 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.595580 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.595602 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.595614 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:38Z","lastTransitionTime":"2026-02-24T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.699095 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.699165 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.699189 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.699225 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.699253 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:38Z","lastTransitionTime":"2026-02-24T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.803188 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.803236 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.803249 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.803268 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.803280 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:38Z","lastTransitionTime":"2026-02-24T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.815293 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs\") pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6"
Feb 24 00:07:38 crc kubenswrapper[4756]: E0224 00:07:38.815491 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 00:07:38 crc kubenswrapper[4756]: E0224 00:07:38.815572 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs podName:1508259c-4154-46a3-a390-2200d44b9524 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:40.81555021 +0000 UTC m=+117.726412833 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs") pod "network-metrics-daemon-jlcw6" (UID: "1508259c-4154-46a3-a390-2200d44b9524") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.832392 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.832474 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.832490 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 00:07:38 crc kubenswrapper[4756]: E0224 00:07:38.832559 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.832624 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6"
Feb 24 00:07:38 crc kubenswrapper[4756]: E0224 00:07:38.832660 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 00:07:38 crc kubenswrapper[4756]: E0224 00:07:38.832763 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.854381 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:36:26.599383023 +0000 UTC
Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.906058 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.906166 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.906189 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.906219 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:38 crc kubenswrapper[4756]: I0224 00:07:38.906241 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:38Z","lastTransitionTime":"2026-02-24T00:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.010671 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.010735 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.010747 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.010766 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.010779 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:39Z","lastTransitionTime":"2026-02-24T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.113865 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.113932 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.113955 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.113982 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.114001 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:39Z","lastTransitionTime":"2026-02-24T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.217834 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.218378 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.218399 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.218426 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.218445 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:39Z","lastTransitionTime":"2026-02-24T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.321687 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.321734 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.321745 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.321765 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.321778 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:39Z","lastTransitionTime":"2026-02-24T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.425007 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.425097 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.425110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.425130 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.425146 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:39Z","lastTransitionTime":"2026-02-24T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.527970 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.528038 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.528085 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.528122 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.528148 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:39Z","lastTransitionTime":"2026-02-24T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.631716 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.631793 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.631813 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.631842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.631861 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:39Z","lastTransitionTime":"2026-02-24T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.735226 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.735280 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.735291 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.735320 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.735333 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:39Z","lastTransitionTime":"2026-02-24T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.838252 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.838313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.838330 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.838351 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.838373 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:39Z","lastTransitionTime":"2026-02-24T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.855439 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 17:24:15.176463152 +0000 UTC Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.940928 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.940993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.941011 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.941036 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:39 crc kubenswrapper[4756]: I0224 00:07:39.941054 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:39Z","lastTransitionTime":"2026-02-24T00:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.043639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.043704 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.043726 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.043754 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.043774 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:40Z","lastTransitionTime":"2026-02-24T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.146502 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.146552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.146561 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.146585 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.146609 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:40Z","lastTransitionTime":"2026-02-24T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.250025 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.250129 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.250161 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.250197 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.250220 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:40Z","lastTransitionTime":"2026-02-24T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.353919 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.353976 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.353993 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.354015 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.354028 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:40Z","lastTransitionTime":"2026-02-24T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.456650 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.456724 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.456746 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.456778 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.456798 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:40Z","lastTransitionTime":"2026-02-24T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.560688 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.560765 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.560781 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.560829 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.560852 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:40Z","lastTransitionTime":"2026-02-24T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.664656 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.664737 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.664763 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.664796 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.664819 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:40Z","lastTransitionTime":"2026-02-24T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.768328 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.768412 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.768441 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.768470 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.768487 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:40Z","lastTransitionTime":"2026-02-24T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.832501 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.832569 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.832569 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.832611 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 24 00:07:40 crc kubenswrapper[4756]: E0224 00:07:40.832724 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jlcw6" podUID="1508259c-4154-46a3-a390-2200d44b9524"
Feb 24 00:07:40 crc kubenswrapper[4756]: E0224 00:07:40.832890 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 24 00:07:40 crc kubenswrapper[4756]: E0224 00:07:40.833003 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 24 00:07:40 crc kubenswrapper[4756]: E0224 00:07:40.833182 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.842003 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs\") pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6"
Feb 24 00:07:40 crc kubenswrapper[4756]: E0224 00:07:40.842190 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 00:07:40 crc kubenswrapper[4756]: E0224 00:07:40.842314 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs podName:1508259c-4154-46a3-a390-2200d44b9524 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:44.842286789 +0000 UTC m=+121.753149452 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs") pod "network-metrics-daemon-jlcw6" (UID: "1508259c-4154-46a3-a390-2200d44b9524") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.856464 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 03:16:25.02189251 +0000 UTC
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.872480 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.872552 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.872574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.872605 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.872627 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:40Z","lastTransitionTime":"2026-02-24T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.977217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.977294 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.977313 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.977343 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:40 crc kubenswrapper[4756]: I0224 00:07:40.977361 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:40Z","lastTransitionTime":"2026-02-24T00:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.080852 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.080900 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.080910 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.080928 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.080940 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.184958 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.185037 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.185058 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.185120 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.185143 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.288032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.288087 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.288098 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.288115 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.288140 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.391873 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.391926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.391943 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.391965 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.391978 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.494825 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.494947 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.494961 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.494982 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.494995 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.597915 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.598001 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.598019 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.598048 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.598097 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.701203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.701262 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.701275 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.701300 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.701317 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.804574 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.804624 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.804639 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.804660 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.804678 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.857107 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 20:30:44.674414494 +0000 UTC Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.907180 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.907538 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.907554 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.907572 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:41 crc kubenswrapper[4756]: I0224 00:07:41.907585 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:41Z","lastTransitionTime":"2026-02-24T00:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.011010 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.011487 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.011643 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.011786 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.011972 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:42Z","lastTransitionTime":"2026-02-24T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.115964 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.116040 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.116056 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.116110 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.116131 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:42Z","lastTransitionTime":"2026-02-24T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.219698 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.219764 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.219785 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.219814 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.219836 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:42Z","lastTransitionTime":"2026-02-24T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.324280 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.324358 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.324381 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.324416 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.324436 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:42Z","lastTransitionTime":"2026-02-24T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.428239 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.428321 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.428342 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.428370 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.428390 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:42Z","lastTransitionTime":"2026-02-24T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.531558 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.531633 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.531658 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.531693 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.531716 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:42Z","lastTransitionTime":"2026-02-24T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.635389 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.635439 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.635456 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.635482 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.635500 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:42Z","lastTransitionTime":"2026-02-24T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.738991 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.739146 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.739173 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.739203 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.739222 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:42Z","lastTransitionTime":"2026-02-24T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.833165 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.833165 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:42 crc kubenswrapper[4756]: E0224 00:07:42.833474 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.833236 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.833235 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:42 crc kubenswrapper[4756]: E0224 00:07:42.833568 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jlcw6" podUID="1508259c-4154-46a3-a390-2200d44b9524" Feb 24 00:07:42 crc kubenswrapper[4756]: E0224 00:07:42.833655 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:42 crc kubenswrapper[4756]: E0224 00:07:42.833920 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.843373 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.843432 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.843458 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.843489 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.843512 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:42Z","lastTransitionTime":"2026-02-24T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.858246 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:22:53.627961397 +0000 UTC Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.947811 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.947894 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.947911 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.947962 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:42 crc kubenswrapper[4756]: I0224 00:07:42.947981 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:42Z","lastTransitionTime":"2026-02-24T00:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.051270 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.051349 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.051373 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.051404 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.051426 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:43Z","lastTransitionTime":"2026-02-24T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.155260 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.155335 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.155354 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.155383 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.155403 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:43Z","lastTransitionTime":"2026-02-24T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.258032 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.258123 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.258137 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.258158 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.258171 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:43Z","lastTransitionTime":"2026-02-24T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.360730 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.360798 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.360816 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.360842 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.360861 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:43Z","lastTransitionTime":"2026-02-24T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.462872 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.462926 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.462941 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.462964 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.462979 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:43Z","lastTransitionTime":"2026-02-24T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.566149 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.566217 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.566237 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.566267 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.566288 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:43Z","lastTransitionTime":"2026-02-24T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.669949 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.670033 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.670052 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.670124 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.670144 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:43Z","lastTransitionTime":"2026-02-24T00:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:43 crc kubenswrapper[4756]: E0224 00:07:43.770987 4756 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.858434 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:48:06.262306564 +0000 UTC Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.860560 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0812398e-0b6a-4b1e-96a5-82b22de24397\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b3b4761d8f9e3684bd408774f48a3020e73ef4c70a730b24c6ff84690071d26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://388f71e83e
78e525bde8b9d702ca3b166cf02bdc0c96b261e40b5e919ef959c7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:13Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0224 00:05:45.974905 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0224 00:05:45.977806 1 observer_polling.go:159] Starting file observer\\\\nI0224 00:05:46.004712 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0224 00:05:46.007758 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0224 00:06:12.995645 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0224 00:06:12.995851 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:12Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3149e955b8af436223fb79ff630ba79ab57b580d5623ea1fb023332b9d328c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8cf69a3c41412f9a23db45ded443b5665e83ed7b51f9c0e0b025023fb63cbb7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.882560 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f34460389eebf3ecfd746d12032aa3314aad5267c587441b2b91f0c9536d9fee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.899507 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"071714b1-b44e-4085-adf5-0ed6b6e64af3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://352dceafb1841739204f3eb7e976b4703e8397d043d318fa2001fa959ccde0d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxn24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qb88h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.924931 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"275f42a1-754e-4b35-8960-050748cbf355\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://23d2d3ea83264b1b8b25717e2e9b5ad6ac4895137a79418bb22752834989b0fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aad59bf1e14883d9fbb7829521cd9194996c1895339358f009ae49a29fabf944\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5f8f8581a10eb49989fbe0e7ded5bc64e42a936db4641fd8774ad4a3dd22559\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://865123482bc20141a68d773085948650dfed1535b6c708ade3e91753339ef34b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70082
613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70082613242e0834bf217f926af18513148cf270138ee171ab7984a030c3e36c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb5c9f87c9c802f4aee80a1248d22b9f6dabab66d656e56a95e84bf4154178e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46cf485d38ac5c6dd3ae32617c74429fc859d4000f25b4f7aa73913913051dd7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8cxg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-f8vwm\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.939872 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-r75b8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0e52828-39b2-4e41-87a0-9c1b983b5f56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e9cf1ca01dc51be55d6c01909decaef0f300db8bae2200ded7b83759aa3accc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-24T00:07:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsw4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:25Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-r75b8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.957899 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f918684-707c-4fd4-9397-aa913a3849e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4208b8af1510cbec984103634b044cd1d2e8a5d0c936fe3fb1622f36977b35ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e03104f7e17ad36838ae2e0faadf540a2a55f930289028a8aa7ebcca183fc587\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:43 crc kubenswrapper[4756]: E0224 00:07:43.960342 4756 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 24 00:07:43 crc kubenswrapper[4756]: I0224 00:07:43.994794 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c22e243-1bfb-4802-9918-ad27d3360862\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ecfffd5368e6fea7b788387ddcdcdb60cf3d33a842062e8f678d729fb85d9fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\
\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://362580c6c6f1c12e1808e16da084421c325a1723a9b4d0cf53340bf1a06daa3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8377d60ace782768d864cabe78e4435e4f4f5522f55c685d212d76cd98d3352b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c35ec04374cdc2a78a114efa82029d17ec5058fce3741d3f14cd3a465c12b21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee
1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://03fc69e86ef13b25928e493e660ba70a6e8972a8f8f91532ab357f5d792cecfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46d6476bb5dc04189788bad5c4564c618c1b878df9b51f3c0bedd8ce33355245\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://977e111026279bd24e1f1d4e8f76ebc8d4d52ed6c9b95b908b4be8fcc020462f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d31bf26f7b97773999b1b80e1d4b9f6036f03a40f3ba1818c4be4e1d99dcf5af\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:47Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:43Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.013979 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12dc3a1f-81b2-4003-bbcd-b3664c283fcd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-24T00:06:18Z\\\"
,\\\"message\\\":\\\"W0224 00:06:17.153238 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0224 00:06:17.154364 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771891577 cert, and key in /tmp/serving-cert-1801803813/serving-signer.crt, /tmp/serving-cert-1801803813/serving-signer.key\\\\nI0224 00:06:17.738343 1 observer_polling.go:159] Starting file observer\\\\nW0224 00:06:17.744525 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:17Z is after 2026-02-23T05:33:16Z\\\\nI0224 00:06:17.744702 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0224 00:06:17.747960 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1801803813/tls.crt::/tmp/serving-cert-1801803813/tls.key\\\\\\\"\\\\nF0224 00:06:18.098981 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:06:18Z is after 
2026-02-23T05:33:16Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:06:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.033444 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.049811 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://305753dcb5e3905948edb2c90cafce051a90359991feef75de1bdc3a86270e0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57c0cd119da1fd294f68352bac1a9495c1afc454a3f95f761a671c3dd3509dd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.065878 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-64mrj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b884f6c3-73b0-42a3-b301-c56e0043cd70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f23ce2aead3fbfe86302e5491c9bc2a2786f8ad18792301b5893d35394dce015\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zks4n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-64mrj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.085550 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5xm6s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9de3ae24-6a68-4d42-bb86-f3d22a6b651a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jz4cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5xm6s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.104580 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4b443814-16b7-4597-9f5d-a42b27c8dd6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7143162dddaf42e2a66682d98b85c7a1798a222b7ca34f9b8d6556db754b64ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f1
2962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg22p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaaaa5665d28e819feee79f132b7cb1776dcb1fcbfffdf3d510544c61a4565a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fg22p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xpc4l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.125879 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"668fd676-f7f4-4fe9-8b55-d13c5b49a762\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:05:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2beda7e1242a9a6aa573f3be4833dcabe8222301993b21c4e4e492e6e7739beb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d996a80e5679a516681bfb2c95c29a2a9956edbf9412cb8e1196f9fd76b78af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a9c746ac46bc65048a5196e9b7427cfe699f1e644c593537e5a95f006804e1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:05:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641
900892038f29ee8d332e8d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5a9252b140b548a0f2fa7d6e81123809441bc641900892038f29ee8d332e8d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:05:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:05:44Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:05:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.150899 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.169615 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.187600 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-24T00:06:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a59834b8518080769d8f161c6f5e94e42106823127f081d2609a10637b96871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:06:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.222245 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29053aeb-7913-4b4d-94dd-503af8e1415f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-24T00:07:33Z\\\",\\\"message\\\":\\\" for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0224 00:07:33.323887 6606 obj_retry.go:365] Adding new object: *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h\\\\nI0224 00:07:33.323900 6606 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-qb88h 
in node crc\\\\nI0224 00:07:33.323904 6606 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0224 00:07:33.323914 6606 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-qb88h after 0 failed attempt(s)\\\\nF0224 00:07:33.323925 6606 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-iden\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:32Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dvdfz_openshift-ovn-kubernetes(29053aeb-7913-4b4d-94dd-503af8e1415f)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-24T00:07:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5bc85ebf0f4f39f3cb
2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-24T00:07:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-24T00:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lvfww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dvdfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.239454 4756 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jlcw6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1508259c-4154-46a3-a390-2200d44b9524\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-24T00:07:37Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmk5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xmk5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-24T00:07:37Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jlcw6\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-24T00:07:44Z is after 2025-08-24T17:21:41Z" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.832655 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.832717 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.832655 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:44 crc kubenswrapper[4756]: E0224 00:07:44.832868 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:44 crc kubenswrapper[4756]: E0224 00:07:44.833013 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jlcw6" podUID="1508259c-4154-46a3-a390-2200d44b9524" Feb 24 00:07:44 crc kubenswrapper[4756]: E0224 00:07:44.833198 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.833386 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:44 crc kubenswrapper[4756]: E0224 00:07:44.833698 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.858965 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:50:10.066732731 +0000 UTC Feb 24 00:07:44 crc kubenswrapper[4756]: I0224 00:07:44.898874 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs\") pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:44 crc kubenswrapper[4756]: E0224 00:07:44.899243 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:44 crc kubenswrapper[4756]: E0224 00:07:44.899364 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs podName:1508259c-4154-46a3-a390-2200d44b9524 nodeName:}" failed. No retries permitted until 2026-02-24 00:07:52.899332809 +0000 UTC m=+129.810195492 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs") pod "network-metrics-daemon-jlcw6" (UID: "1508259c-4154-46a3-a390-2200d44b9524") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:45 crc kubenswrapper[4756]: I0224 00:07:45.859160 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:50:01.638713165 +0000 UTC Feb 24 00:07:46 crc kubenswrapper[4756]: I0224 00:07:46.833010 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:46 crc kubenswrapper[4756]: I0224 00:07:46.833012 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:46 crc kubenswrapper[4756]: I0224 00:07:46.833158 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:46 crc kubenswrapper[4756]: I0224 00:07:46.833159 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:46 crc kubenswrapper[4756]: E0224 00:07:46.833836 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:46 crc kubenswrapper[4756]: E0224 00:07:46.833994 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:46 crc kubenswrapper[4756]: E0224 00:07:46.834104 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:46 crc kubenswrapper[4756]: E0224 00:07:46.834201 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jlcw6" podUID="1508259c-4154-46a3-a390-2200d44b9524" Feb 24 00:07:46 crc kubenswrapper[4756]: I0224 00:07:46.859377 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:47:34.292681532 +0000 UTC Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.087753 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.088170 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.088289 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.088384 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.088550 4756 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-24T00:07:47Z","lastTransitionTime":"2026-02-24T00:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.161356 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s"] Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.162015 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.166133 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.166133 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.166271 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.167056 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.200917 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=33.200882887 podStartE2EDuration="33.200882887s" podCreationTimestamp="2026-02-24 00:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:47.199950248 +0000 UTC m=+124.110812931" watchObservedRunningTime="2026-02-24 00:07:47.200882887 +0000 UTC m=+124.111745560" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.227724 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/4de29d9e-d532-4bbc-9a6f-1aba9e297318-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.227801 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4de29d9e-d532-4bbc-9a6f-1aba9e297318-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.227839 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de29d9e-d532-4bbc-9a6f-1aba9e297318-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.227868 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4de29d9e-d532-4bbc-9a6f-1aba9e297318-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.228066 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4de29d9e-d532-4bbc-9a6f-1aba9e297318-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.231362 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=86.231336101 podStartE2EDuration="1m26.231336101s" podCreationTimestamp="2026-02-24 00:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:47.230843386 +0000 UTC m=+124.141706059" watchObservedRunningTime="2026-02-24 00:07:47.231336101 +0000 UTC m=+124.142198754" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.301773 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-64mrj" podStartSLOduration=64.301739683 podStartE2EDuration="1m4.301739683s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:47.281897398 +0000 UTC m=+124.192760091" watchObservedRunningTime="2026-02-24 00:07:47.301739683 +0000 UTC m=+124.212602346" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.319941 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5xm6s" podStartSLOduration=63.319910696 podStartE2EDuration="1m3.319910696s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:47.302776315 +0000 UTC m=+124.213639008" watchObservedRunningTime="2026-02-24 00:07:47.319910696 +0000 UTC m=+124.230773359" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.329612 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/4de29d9e-d532-4bbc-9a6f-1aba9e297318-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.329920 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4de29d9e-d532-4bbc-9a6f-1aba9e297318-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.330138 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de29d9e-d532-4bbc-9a6f-1aba9e297318-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.330333 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4de29d9e-d532-4bbc-9a6f-1aba9e297318-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.330468 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4de29d9e-d532-4bbc-9a6f-1aba9e297318-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc 
kubenswrapper[4756]: I0224 00:07:47.329746 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4de29d9e-d532-4bbc-9a6f-1aba9e297318-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.330744 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4de29d9e-d532-4bbc-9a6f-1aba9e297318-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.330806 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4de29d9e-d532-4bbc-9a6f-1aba9e297318-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.338443 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.3384223 podStartE2EDuration="31.3384223s" podCreationTimestamp="2026-02-24 00:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:47.337467971 +0000 UTC m=+124.248330604" watchObservedRunningTime="2026-02-24 00:07:47.3384223 +0000 UTC m=+124.249284973" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.340015 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/4de29d9e-d532-4bbc-9a6f-1aba9e297318-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.358197 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4de29d9e-d532-4bbc-9a6f-1aba9e297318-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wbr7s\" (UID: \"4de29d9e-d532-4bbc-9a6f-1aba9e297318\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.362948 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xpc4l" podStartSLOduration=63.36292604 podStartE2EDuration="1m3.36292604s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:47.362431324 +0000 UTC m=+124.273293957" watchObservedRunningTime="2026-02-24 00:07:47.36292604 +0000 UTC m=+124.273788693" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.468524 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=67.468500002 podStartE2EDuration="1m7.468500002s" podCreationTimestamp="2026-02-24 00:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:47.451269148 +0000 UTC m=+124.362131791" watchObservedRunningTime="2026-02-24 00:07:47.468500002 +0000 UTC m=+124.379362625" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.484109 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" Feb 24 00:07:47 crc kubenswrapper[4756]: W0224 00:07:47.498664 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4de29d9e_d532_4bbc_9a6f_1aba9e297318.slice/crio-d5774b21489f6b9b88f9d8be9bec3f920df6b3c6d853122bb7806ab2ea5f708e WatchSource:0}: Error finding container d5774b21489f6b9b88f9d8be9bec3f920df6b3c6d853122bb7806ab2ea5f708e: Status 404 returned error can't find the container with id d5774b21489f6b9b88f9d8be9bec3f920df6b3c6d853122bb7806ab2ea5f708e Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.512683 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-f8vwm" podStartSLOduration=63.512660681 podStartE2EDuration="1m3.512660681s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:47.511544406 +0000 UTC m=+124.422407059" watchObservedRunningTime="2026-02-24 00:07:47.512660681 +0000 UTC m=+124.423523314" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.513718 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podStartSLOduration=64.513710863 podStartE2EDuration="1m4.513710863s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:47.482858587 +0000 UTC m=+124.393721230" watchObservedRunningTime="2026-02-24 00:07:47.513710863 +0000 UTC m=+124.424573496" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.524640 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-r75b8" 
podStartSLOduration=64.524605881 podStartE2EDuration="1m4.524605881s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:47.524370034 +0000 UTC m=+124.435232657" watchObservedRunningTime="2026-02-24 00:07:47.524605881 +0000 UTC m=+124.435468534" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.540921 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.540898846 podStartE2EDuration="1m8.540898846s" podCreationTimestamp="2026-02-24 00:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:47.540508734 +0000 UTC m=+124.451371357" watchObservedRunningTime="2026-02-24 00:07:47.540898846 +0000 UTC m=+124.451761479" Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.860278 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:21:08.331078936 +0000 UTC Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.860770 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 24 00:07:47 crc kubenswrapper[4756]: I0224 00:07:47.868984 4756 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 24 00:07:48 crc kubenswrapper[4756]: I0224 00:07:48.482504 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" event={"ID":"4de29d9e-d532-4bbc-9a6f-1aba9e297318","Type":"ContainerStarted","Data":"7017809127a6b56bd7d6b389dc3cf2b742b1cab87eea75619f4cfd9211351787"} Feb 24 00:07:48 crc kubenswrapper[4756]: I0224 00:07:48.482563 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" event={"ID":"4de29d9e-d532-4bbc-9a6f-1aba9e297318","Type":"ContainerStarted","Data":"d5774b21489f6b9b88f9d8be9bec3f920df6b3c6d853122bb7806ab2ea5f708e"} Feb 24 00:07:48 crc kubenswrapper[4756]: I0224 00:07:48.503754 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wbr7s" podStartSLOduration=65.503721769 podStartE2EDuration="1m5.503721769s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:48.502680676 +0000 UTC m=+125.413543349" watchObservedRunningTime="2026-02-24 00:07:48.503721769 +0000 UTC m=+125.414584462" Feb 24 00:07:48 crc kubenswrapper[4756]: I0224 00:07:48.832345 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:48 crc kubenswrapper[4756]: I0224 00:07:48.832546 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:48 crc kubenswrapper[4756]: I0224 00:07:48.832644 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:48 crc kubenswrapper[4756]: I0224 00:07:48.832516 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:48 crc kubenswrapper[4756]: E0224 00:07:48.832768 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:48 crc kubenswrapper[4756]: E0224 00:07:48.833010 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:48 crc kubenswrapper[4756]: E0224 00:07:48.833739 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:48 crc kubenswrapper[4756]: E0224 00:07:48.834016 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jlcw6" podUID="1508259c-4154-46a3-a390-2200d44b9524" Feb 24 00:07:48 crc kubenswrapper[4756]: I0224 00:07:48.834358 4756 scope.go:117] "RemoveContainer" containerID="7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae" Feb 24 00:07:48 crc kubenswrapper[4756]: E0224 00:07:48.961892 4756 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 24 00:07:49 crc kubenswrapper[4756]: I0224 00:07:49.488967 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dvdfz_29053aeb-7913-4b4d-94dd-503af8e1415f/ovnkube-controller/1.log" Feb 24 00:07:49 crc kubenswrapper[4756]: I0224 00:07:49.491805 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerStarted","Data":"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"} Feb 24 00:07:49 crc kubenswrapper[4756]: I0224 00:07:49.492414 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:49 crc kubenswrapper[4756]: I0224 00:07:49.527222 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" podStartSLOduration=65.527195552 podStartE2EDuration="1m5.527195552s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:07:49.523782066 +0000 UTC m=+126.434644729" watchObservedRunningTime="2026-02-24 00:07:49.527195552 +0000 UTC m=+126.438058205" Feb 24 00:07:49 crc kubenswrapper[4756]: I0224 00:07:49.870786 4756 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jlcw6"] Feb 24 00:07:49 crc kubenswrapper[4756]: I0224 00:07:49.870967 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:49 crc kubenswrapper[4756]: E0224 00:07:49.871106 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jlcw6" podUID="1508259c-4154-46a3-a390-2200d44b9524" Feb 24 00:07:50 crc kubenswrapper[4756]: I0224 00:07:50.833250 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:50 crc kubenswrapper[4756]: I0224 00:07:50.833347 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:50 crc kubenswrapper[4756]: E0224 00:07:50.833774 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:50 crc kubenswrapper[4756]: E0224 00:07:50.833873 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:50 crc kubenswrapper[4756]: I0224 00:07:50.834151 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:50 crc kubenswrapper[4756]: E0224 00:07:50.834231 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:51 crc kubenswrapper[4756]: I0224 00:07:51.832931 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:51 crc kubenswrapper[4756]: E0224 00:07:51.833212 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jlcw6" podUID="1508259c-4154-46a3-a390-2200d44b9524" Feb 24 00:07:52 crc kubenswrapper[4756]: I0224 00:07:52.832898 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:52 crc kubenswrapper[4756]: I0224 00:07:52.832899 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:52 crc kubenswrapper[4756]: E0224 00:07:52.833110 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 24 00:07:52 crc kubenswrapper[4756]: E0224 00:07:52.833489 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 24 00:07:52 crc kubenswrapper[4756]: I0224 00:07:52.834134 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:52 crc kubenswrapper[4756]: E0224 00:07:52.834431 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 24 00:07:52 crc kubenswrapper[4756]: I0224 00:07:52.998434 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs\") pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:52 crc kubenswrapper[4756]: E0224 00:07:52.998774 4756 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:52 crc kubenswrapper[4756]: E0224 00:07:52.998911 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs podName:1508259c-4154-46a3-a390-2200d44b9524 nodeName:}" failed. No retries permitted until 2026-02-24 00:08:08.998877398 +0000 UTC m=+145.909740061 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs") pod "network-metrics-daemon-jlcw6" (UID: "1508259c-4154-46a3-a390-2200d44b9524") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 24 00:07:53 crc kubenswrapper[4756]: I0224 00:07:53.206990 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:07:53 crc kubenswrapper[4756]: I0224 00:07:53.832853 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:53 crc kubenswrapper[4756]: E0224 00:07:53.835102 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jlcw6" podUID="1508259c-4154-46a3-a390-2200d44b9524" Feb 24 00:07:54 crc kubenswrapper[4756]: I0224 00:07:54.832887 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:07:54 crc kubenswrapper[4756]: I0224 00:07:54.832903 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:07:54 crc kubenswrapper[4756]: I0224 00:07:54.832905 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:07:54 crc kubenswrapper[4756]: I0224 00:07:54.836315 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 00:07:54 crc kubenswrapper[4756]: I0224 00:07:54.836471 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 24 00:07:54 crc kubenswrapper[4756]: I0224 00:07:54.836577 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 00:07:54 crc kubenswrapper[4756]: I0224 00:07:54.840764 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 24 00:07:55 crc kubenswrapper[4756]: I0224 00:07:55.832679 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:07:55 crc kubenswrapper[4756]: I0224 00:07:55.834946 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 24 00:07:55 crc kubenswrapper[4756]: I0224 00:07:55.835530 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.282398 4756 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.334567 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tx4dr"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.335909 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.337761 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.338297 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.338961 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.339127 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.339515 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.339586 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.339930 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.340234 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.341943 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.342528 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.344320 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29531520-pdn56"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.344952 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-pdn56" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.347700 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.347778 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.348248 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.349557 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4jtsc"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.350265 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.350556 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.352313 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.352437 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-24rzj"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.352441 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.352994 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-24rzj" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.355030 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.360293 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.360293 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.360345 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.360300 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.360312 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.360651 4756 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.360657 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.360308 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.360891 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.360972 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.361983 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-988lm"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.363203 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.361036 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.362403 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.362409 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.362478 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.362511 4756 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.362760 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.362861 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.362856 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.364110 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.364274 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.364343 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.364621 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.364859 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.364969 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.365102 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-dockercfg-f62pw" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.365268 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.364638 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.364693 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.364741 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.370880 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.379347 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dhtw6"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.380256 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.381961 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.382046 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5qkbl"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.393708 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.394306 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xrpb6"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.394454 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.394474 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.395211 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dm5nw"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.395822 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.395961 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f6x7l"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.396004 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.396843 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.396899 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.397605 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.398761 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.399736 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.404001 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.405954 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.407189 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.407588 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.408140 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-689hw"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.408656 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.408843 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.408998 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.409727 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.409915 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.410512 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.419025 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.419351 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.419492 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.419614 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.419755 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.419881 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.420546 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.420688 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.420823 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.420959 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 24 
00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.421016 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.421331 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.421341 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.421460 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.421346 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.421683 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.421789 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.421971 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.422016 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.422176 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.422209 4756 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.422335 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.422381 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.422338 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.423298 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.424299 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.424551 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.424690 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.424717 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.424807 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.424922 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.425107 4756 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.425315 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.425392 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.425394 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.425589 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.425873 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.426110 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.426244 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.426398 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.426416 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.426559 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 00:07:57 crc 
kubenswrapper[4756]: I0224 00:07:57.427366 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.427386 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.427427 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.427527 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.427584 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.427765 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.427807 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.427881 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.427962 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.428030 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.429271 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.441335 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.443506 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.444941 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.455452 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.456044 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.456867 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.460244 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.460926 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.464936 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b1b8164-4f03-4067-b349-265636839558-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.465091 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-audit\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.465128 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mjbp\" (UniqueName: \"kubernetes.io/projected/8083f569-75ed-42a3-aee0-3590a86f4329-kube-api-access-8mjbp\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466155 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5fv8q\" (UniqueName: \"kubernetes.io/projected/3b1b8164-4f03-4067-b349-265636839558-kube-api-access-5fv8q\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466205 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5de838f-c611-4eea-8e8c-51016e473942-auth-proxy-config\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466224 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-oauth-serving-cert\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466270 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hz7p\" (UniqueName: \"kubernetes.io/projected/735c2ab3-4d91-4906-81f6-77224425b729-kube-api-access-2hz7p\") pod \"downloads-7954f5f757-24rzj\" (UID: \"735c2ab3-4d91-4906-81f6-77224425b729\") " pod="openshift-console/downloads-7954f5f757-24rzj" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466289 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/68e02cbc-c03c-45cd-916a-16dd3b0052cd-serviceca\") pod \"image-pruner-29531520-pdn56\" (UID: \"68e02cbc-c03c-45cd-916a-16dd3b0052cd\") " 
pod="openshift-image-registry/image-pruner-29531520-pdn56" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466308 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c5de838f-c611-4eea-8e8c-51016e473942-machine-approver-tls\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466328 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b17dc2ee-4e0e-4c28-a410-0ac418884f44-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vv4vz\" (UID: \"b17dc2ee-4e0e-4c28-a410-0ac418884f44\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466348 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b1b8164-4f03-4067-b349-265636839558-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466368 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9r8b\" (UniqueName: \"kubernetes.io/projected/344d0ffc-25ff-4503-a029-129a7e178a11-kube-api-access-t9r8b\") pod \"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466386 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8083f569-75ed-42a3-aee0-3590a86f4329-etcd-client\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466403 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8083f569-75ed-42a3-aee0-3590a86f4329-audit-dir\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466429 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-client-ca\") pod \"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466465 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-config\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466496 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b17dc2ee-4e0e-4c28-a410-0ac418884f44-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vv4vz\" (UID: \"b17dc2ee-4e0e-4c28-a410-0ac418884f44\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466515 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-988lm\" (UID: \"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466546 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slw5r\" (UniqueName: \"kubernetes.io/projected/e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2-kube-api-access-slw5r\") pod \"openshift-config-operator-7777fb866f-988lm\" (UID: \"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466564 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wlxw\" (UniqueName: \"kubernetes.io/projected/68e02cbc-c03c-45cd-916a-16dd3b0052cd-kube-api-access-7wlxw\") pod \"image-pruner-29531520-pdn56\" (UID: \"68e02cbc-c03c-45cd-916a-16dd3b0052cd\") " pod="openshift-image-registry/image-pruner-29531520-pdn56" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466584 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85t4f\" (UniqueName: \"kubernetes.io/projected/b17dc2ee-4e0e-4c28-a410-0ac418884f44-kube-api-access-85t4f\") pod \"openshift-apiserver-operator-796bbdcf4f-vv4vz\" (UID: \"b17dc2ee-4e0e-4c28-a410-0ac418884f44\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466626 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-service-ca\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466649 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f79308a5-8560-4d7d-9180-92f05193a4ce-console-oauth-config\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466671 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466691 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-trusted-ca-bundle\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466716 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-image-import-ca\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 
00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466732 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8083f569-75ed-42a3-aee0-3590a86f4329-serving-cert\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466749 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsgbq\" (UniqueName: \"kubernetes.io/projected/f79308a5-8560-4d7d-9180-92f05193a4ce-kube-api-access-vsgbq\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466774 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5de838f-c611-4eea-8e8c-51016e473942-config\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466792 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2-serving-cert\") pod \"openshift-config-operator-7777fb866f-988lm\" (UID: \"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466808 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f79308a5-8560-4d7d-9180-92f05193a4ce-console-serving-cert\") pod 
\"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466828 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8083f569-75ed-42a3-aee0-3590a86f4329-node-pullsecrets\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466844 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8083f569-75ed-42a3-aee0-3590a86f4329-encryption-config\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.466881 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-etcd-serving-ca\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.467697 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-console-config\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.467786 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqk8m\" (UniqueName: 
\"kubernetes.io/projected/c5de838f-c611-4eea-8e8c-51016e473942-kube-api-access-sqk8m\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.467814 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-config\") pod \"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.467903 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b1b8164-4f03-4067-b349-265636839558-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.467932 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/344d0ffc-25ff-4503-a029-129a7e178a11-serving-cert\") pod \"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.468480 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tznzq"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.468605 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.468961 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.471976 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7qp8v"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.472396 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.472740 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.473134 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.473581 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tznzq" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.473861 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.474311 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.476185 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.476726 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.478277 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.478742 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.480086 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.480805 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.482020 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29531520-pdn56"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.482607 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.483861 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.485135 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.486315 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.491735 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.498312 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.506033 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x4zw8"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.506753 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.507300 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.507727 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.507764 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.508059 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.508291 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.509474 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j7xwn"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.517052 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.520070 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.521724 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.533773 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-522ht"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.534052 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.536584 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.538487 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cvvhf"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.539179 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.541051 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.542915 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.545133 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-q8m47"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.551251 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.552156 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.558754 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4jtsc"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.560140 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.563533 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tx4dr"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.566155 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-d7npx"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.568106 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gdn5q"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.568425 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d7npx" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.570608 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.570954 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5qkbl"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573237 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b1b8164-4f03-4067-b349-265636839558-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573294 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/344d0ffc-25ff-4503-a029-129a7e178a11-serving-cert\") pod \"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573318 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b1b8164-4f03-4067-b349-265636839558-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573343 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-audit\") 
pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573368 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ntc2\" (UniqueName: \"kubernetes.io/projected/7eb6916d-1721-48bf-b67b-00ccc0144871-kube-api-access-9ntc2\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573393 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mjbp\" (UniqueName: \"kubernetes.io/projected/8083f569-75ed-42a3-aee0-3590a86f4329-kube-api-access-8mjbp\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573411 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fv8q\" (UniqueName: \"kubernetes.io/projected/3b1b8164-4f03-4067-b349-265636839558-kube-api-access-5fv8q\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573429 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f19c9286-752d-420e-83bb-010eefd59ea1-config-volume\") pod \"collect-profiles-29531520-9rhnh\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573446 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22582027-e646-48aa-a5f5-7fb50f199830-serving-cert\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573472 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5de838f-c611-4eea-8e8c-51016e473942-auth-proxy-config\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573490 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-oauth-serving-cert\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573510 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hz7p\" (UniqueName: \"kubernetes.io/projected/735c2ab3-4d91-4906-81f6-77224425b729-kube-api-access-2hz7p\") pod \"downloads-7954f5f757-24rzj\" (UID: \"735c2ab3-4d91-4906-81f6-77224425b729\") " pod="openshift-console/downloads-7954f5f757-24rzj" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573528 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/68e02cbc-c03c-45cd-916a-16dd3b0052cd-serviceca\") pod \"image-pruner-29531520-pdn56\" (UID: \"68e02cbc-c03c-45cd-916a-16dd3b0052cd\") " pod="openshift-image-registry/image-pruner-29531520-pdn56" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 
00:07:57.573545 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22582027-e646-48aa-a5f5-7fb50f199830-audit-dir\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573572 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c5de838f-c611-4eea-8e8c-51016e473942-machine-approver-tls\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573596 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b17dc2ee-4e0e-4c28-a410-0ac418884f44-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vv4vz\" (UID: \"b17dc2ee-4e0e-4c28-a410-0ac418884f44\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573614 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b1b8164-4f03-4067-b349-265636839558-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573633 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8083f569-75ed-42a3-aee0-3590a86f4329-etcd-client\") pod \"apiserver-76f77b778f-tx4dr\" (UID: 
\"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573654 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8083f569-75ed-42a3-aee0-3590a86f4329-audit-dir\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573673 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9r8b\" (UniqueName: \"kubernetes.io/projected/344d0ffc-25ff-4503-a029-129a7e178a11-kube-api-access-t9r8b\") pod \"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573694 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-client-ca\") pod \"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573712 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcg84\" (UniqueName: \"kubernetes.io/projected/549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc-kube-api-access-gcg84\") pod \"cluster-samples-operator-665b6dd947-jskfz\" (UID: \"549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573742 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-dscrg\" (UniqueName: \"kubernetes.io/projected/22582027-e646-48aa-a5f5-7fb50f199830-kube-api-access-dscrg\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573762 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-config\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573779 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22582027-e646-48aa-a5f5-7fb50f199830-audit-policies\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573804 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f19c9286-752d-420e-83bb-010eefd59ea1-secret-volume\") pod \"collect-profiles-29531520-9rhnh\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573824 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b17dc2ee-4e0e-4c28-a410-0ac418884f44-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vv4vz\" (UID: \"b17dc2ee-4e0e-4c28-a410-0ac418884f44\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 
00:07:57.573844 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/22582027-e646-48aa-a5f5-7fb50f199830-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573867 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-988lm\" (UID: \"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573886 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22582027-e646-48aa-a5f5-7fb50f199830-etcd-client\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573906 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slw5r\" (UniqueName: \"kubernetes.io/projected/e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2-kube-api-access-slw5r\") pod \"openshift-config-operator-7777fb866f-988lm\" (UID: \"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573929 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wlxw\" (UniqueName: \"kubernetes.io/projected/68e02cbc-c03c-45cd-916a-16dd3b0052cd-kube-api-access-7wlxw\") pod \"image-pruner-29531520-pdn56\" (UID: 
\"68e02cbc-c03c-45cd-916a-16dd3b0052cd\") " pod="openshift-image-registry/image-pruner-29531520-pdn56" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573950 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85t4f\" (UniqueName: \"kubernetes.io/projected/b17dc2ee-4e0e-4c28-a410-0ac418884f44-kube-api-access-85t4f\") pod \"openshift-apiserver-operator-796bbdcf4f-vv4vz\" (UID: \"b17dc2ee-4e0e-4c28-a410-0ac418884f44\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573967 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.573991 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-service-ca\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574011 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f79308a5-8560-4d7d-9180-92f05193a4ce-console-oauth-config\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574029 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/22582027-e646-48aa-a5f5-7fb50f199830-encryption-config\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574050 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574085 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-trusted-ca-bundle\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574106 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-image-import-ca\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574124 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8083f569-75ed-42a3-aee0-3590a86f4329-serving-cert\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574146 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsgbq\" (UniqueName: 
\"kubernetes.io/projected/f79308a5-8560-4d7d-9180-92f05193a4ce-kube-api-access-vsgbq\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574195 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5de838f-c611-4eea-8e8c-51016e473942-config\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574213 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fbhk\" (UniqueName: \"kubernetes.io/projected/f19c9286-752d-420e-83bb-010eefd59ea1-kube-api-access-5fbhk\") pod \"collect-profiles-29531520-9rhnh\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574234 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jskfz\" (UID: \"549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574252 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-client-ca\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc 
kubenswrapper[4756]: I0224 00:07:57.574268 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22582027-e646-48aa-a5f5-7fb50f199830-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574286 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eb6916d-1721-48bf-b67b-00ccc0144871-serving-cert\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574306 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8083f569-75ed-42a3-aee0-3590a86f4329-node-pullsecrets\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574323 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2-serving-cert\") pod \"openshift-config-operator-7777fb866f-988lm\" (UID: \"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574340 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f79308a5-8560-4d7d-9180-92f05193a4ce-console-serving-cert\") pod \"console-f9d7485db-4jtsc\" (UID: 
\"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574369 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8083f569-75ed-42a3-aee0-3590a86f4329-encryption-config\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574386 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-etcd-serving-ca\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574402 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-config\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574420 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqk8m\" (UniqueName: \"kubernetes.io/projected/c5de838f-c611-4eea-8e8c-51016e473942-kube-api-access-sqk8m\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574437 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-config\") pod 
\"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574454 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-console-config\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.574488 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-audit\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.575145 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c5de838f-c611-4eea-8e8c-51016e473942-auth-proxy-config\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.575366 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b17dc2ee-4e0e-4c28-a410-0ac418884f44-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vv4vz\" (UID: \"b17dc2ee-4e0e-4c28-a410-0ac418884f44\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.575431 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-console-config\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.575484 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-image-import-ca\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.576018 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-config\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.576349 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-oauth-serving-cert\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.577328 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b1b8164-4f03-4067-b349-265636839558-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.577630 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c5de838f-c611-4eea-8e8c-51016e473942-config\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.577781 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7qp8v"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.577864 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-988lm\" (UID: \"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.578235 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8083f569-75ed-42a3-aee0-3590a86f4329-audit-dir\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.579847 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-client-ca\") pod \"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.580914 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8083f569-75ed-42a3-aee0-3590a86f4329-serving-cert\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " 
pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.581382 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-689hw"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.581703 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3b1b8164-4f03-4067-b349-265636839558-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.582006 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-trusted-ca-bundle\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.582101 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-config\") pod \"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.582116 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.582382 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b17dc2ee-4e0e-4c28-a410-0ac418884f44-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vv4vz\" (UID: 
\"b17dc2ee-4e0e-4c28-a410-0ac418884f44\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.582691 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-service-ca\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.582756 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8083f569-75ed-42a3-aee0-3590a86f4329-node-pullsecrets\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.583079 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.583529 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f79308a5-8560-4d7d-9180-92f05193a4ce-trusted-ca-bundle\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.583821 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/68e02cbc-c03c-45cd-916a-16dd3b0052cd-serviceca\") pod \"image-pruner-29531520-pdn56\" (UID: \"68e02cbc-c03c-45cd-916a-16dd3b0052cd\") " pod="openshift-image-registry/image-pruner-29531520-pdn56" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.584256 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.584284 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8083f569-75ed-42a3-aee0-3590a86f4329-etcd-client\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.585333 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f6x7l"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.585439 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f79308a5-8560-4d7d-9180-92f05193a4ce-console-serving-cert\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.585485 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2-serving-cert\") pod \"openshift-config-operator-7777fb866f-988lm\" (UID: \"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.586044 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8083f569-75ed-42a3-aee0-3590a86f4329-encryption-config\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.586492 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.587292 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/c5de838f-c611-4eea-8e8c-51016e473942-machine-approver-tls\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.588843 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f79308a5-8560-4d7d-9180-92f05193a4ce-console-oauth-config\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.589026 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.590323 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-24rzj"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.591787 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.592692 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d7npx"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.593155 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8083f569-75ed-42a3-aee0-3590a86f4329-etcd-serving-ca\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " 
pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.594114 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dm5nw"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.595097 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.596219 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dhtw6"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.597414 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/344d0ffc-25ff-4503-a029-129a7e178a11-serving-cert\") pod \"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.597674 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.598594 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-988lm"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.599716 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.600812 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.601818 4756 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-server-2mxpf"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.602777 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2mxpf" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.602889 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.604699 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qrnfg"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.605831 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xrpb6"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.606132 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qrnfg" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.607131 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-q8m47"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.608590 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.610444 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.611676 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.613467 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tznzq"] Feb 24 00:07:57 crc 
kubenswrapper[4756]: I0224 00:07:57.615330 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.616224 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.617379 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x4zw8"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.618294 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.618683 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.620018 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.621186 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gdn5q"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.622200 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qrnfg"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.623216 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j7xwn"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.624207 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.625164 4756 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cvvhf"] Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.638679 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.665629 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675372 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ntc2\" (UniqueName: \"kubernetes.io/projected/7eb6916d-1721-48bf-b67b-00ccc0144871-kube-api-access-9ntc2\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675412 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f19c9286-752d-420e-83bb-010eefd59ea1-config-volume\") pod \"collect-profiles-29531520-9rhnh\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675438 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22582027-e646-48aa-a5f5-7fb50f199830-serving-cert\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675468 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22582027-e646-48aa-a5f5-7fb50f199830-audit-dir\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: 
\"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675508 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcg84\" (UniqueName: \"kubernetes.io/projected/549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc-kube-api-access-gcg84\") pod \"cluster-samples-operator-665b6dd947-jskfz\" (UID: \"549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675526 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dscrg\" (UniqueName: \"kubernetes.io/projected/22582027-e646-48aa-a5f5-7fb50f199830-kube-api-access-dscrg\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675572 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22582027-e646-48aa-a5f5-7fb50f199830-audit-policies\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675594 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f19c9286-752d-420e-83bb-010eefd59ea1-secret-volume\") pod \"collect-profiles-29531520-9rhnh\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675618 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/22582027-e646-48aa-a5f5-7fb50f199830-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675637 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22582027-e646-48aa-a5f5-7fb50f199830-etcd-client\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675649 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/22582027-e646-48aa-a5f5-7fb50f199830-audit-dir\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675675 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675698 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/22582027-e646-48aa-a5f5-7fb50f199830-encryption-config\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675737 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fbhk\" 
(UniqueName: \"kubernetes.io/projected/f19c9286-752d-420e-83bb-010eefd59ea1-kube-api-access-5fbhk\") pod \"collect-profiles-29531520-9rhnh\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675764 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jskfz\" (UID: \"549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675785 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-client-ca\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675813 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eb6916d-1721-48bf-b67b-00ccc0144871-serving-cert\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.675834 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22582027-e646-48aa-a5f5-7fb50f199830-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 
00:07:57.675865 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-config\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.676698 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/22582027-e646-48aa-a5f5-7fb50f199830-audit-policies\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.677175 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22582027-e646-48aa-a5f5-7fb50f199830-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.676880 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/22582027-e646-48aa-a5f5-7fb50f199830-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.677891 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-config\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 
00:07:57.677969 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.678511 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-client-ca\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.678733 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/22582027-e646-48aa-a5f5-7fb50f199830-encryption-config\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.679688 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.679697 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22582027-e646-48aa-a5f5-7fb50f199830-etcd-client\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.679967 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22582027-e646-48aa-a5f5-7fb50f199830-serving-cert\") pod 
\"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.688303 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eb6916d-1721-48bf-b67b-00ccc0144871-serving-cert\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.692309 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jskfz\" (UID: \"549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.698306 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.717939 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.737698 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.749202 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f19c9286-752d-420e-83bb-010eefd59ea1-secret-volume\") pod \"collect-profiles-29531520-9rhnh\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" Feb 24 
00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.757875 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.777660 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.797765 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.818590 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.838795 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.859811 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.879033 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.899006 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.918001 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.927596 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f19c9286-752d-420e-83bb-010eefd59ea1-config-volume\") pod \"collect-profiles-29531520-9rhnh\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.937529 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.959907 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.977397 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 00:07:57 crc kubenswrapper[4756]: I0224 00:07:57.998624 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.019732 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.038551 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.058186 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.079465 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.098881 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 24 00:07:58 crc 
kubenswrapper[4756]: I0224 00:07:58.119943 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.138592 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.157571 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.179205 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.200255 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.218497 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.239191 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.258608 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.278582 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.298797 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.318096 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.338669 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.378717 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.399937 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.418889 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.439905 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.457630 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.478501 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.496312 4756 request.go:700] Waited for 1.001734388s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.518668 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.538590 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.559193 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.578763 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.599163 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.618327 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.638056 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.658557 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.678345 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.698666 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.718533 4756 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.738159 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.757621 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.777871 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.798523 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.817937 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.838494 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.859636 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.888418 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.907220 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.918643 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.938389 4756 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.958311 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.977756 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 24 00:07:58 crc kubenswrapper[4756]: I0224 00:07:58.999001 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.018393 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.039011 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.058771 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.079202 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.098936 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.119294 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.140371 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.159661 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.179404 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.200251 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.219804 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.239850 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.259466 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.278718 4756 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.299274 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.348721 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fv8q\" (UniqueName: \"kubernetes.io/projected/3b1b8164-4f03-4067-b349-265636839558-kube-api-access-5fv8q\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.369105 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3b1b8164-4f03-4067-b349-265636839558-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vllpl\" (UID: \"3b1b8164-4f03-4067-b349-265636839558\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.388762 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mjbp\" (UniqueName: \"kubernetes.io/projected/8083f569-75ed-42a3-aee0-3590a86f4329-kube-api-access-8mjbp\") pod \"apiserver-76f77b778f-tx4dr\" (UID: \"8083f569-75ed-42a3-aee0-3590a86f4329\") " pod="openshift-apiserver/apiserver-76f77b778f-tx4dr"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.400373 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hz7p\" (UniqueName: \"kubernetes.io/projected/735c2ab3-4d91-4906-81f6-77224425b729-kube-api-access-2hz7p\") pod \"downloads-7954f5f757-24rzj\" (UID: \"735c2ab3-4d91-4906-81f6-77224425b729\") " pod="openshift-console/downloads-7954f5f757-24rzj"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.426314 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsgbq\" (UniqueName: \"kubernetes.io/projected/f79308a5-8560-4d7d-9180-92f05193a4ce-kube-api-access-vsgbq\") pod \"console-f9d7485db-4jtsc\" (UID: \"f79308a5-8560-4d7d-9180-92f05193a4ce\") " pod="openshift-console/console-f9d7485db-4jtsc"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.439230 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85t4f\" (UniqueName: \"kubernetes.io/projected/b17dc2ee-4e0e-4c28-a410-0ac418884f44-kube-api-access-85t4f\") pod \"openshift-apiserver-operator-796bbdcf4f-vv4vz\" (UID: \"b17dc2ee-4e0e-4c28-a410-0ac418884f44\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.452037 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slw5r\" (UniqueName: \"kubernetes.io/projected/e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2-kube-api-access-slw5r\") pod \"openshift-config-operator-7777fb866f-988lm\" (UID: \"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.465384 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-tx4dr"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.479300 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wlxw\" (UniqueName: \"kubernetes.io/projected/68e02cbc-c03c-45cd-916a-16dd3b0052cd-kube-api-access-7wlxw\") pod \"image-pruner-29531520-pdn56\" (UID: \"68e02cbc-c03c-45cd-916a-16dd3b0052cd\") " pod="openshift-image-registry/image-pruner-29531520-pdn56"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.496745 4756 request.go:700] Waited for 1.917315724s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.496816 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.497809 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9r8b\" (UniqueName: \"kubernetes.io/projected/344d0ffc-25ff-4503-a029-129a7e178a11-kube-api-access-t9r8b\") pod \"route-controller-manager-6576b87f9c-zldwl\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.512382 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.513021 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqk8m\" (UniqueName: \"kubernetes.io/projected/c5de838f-c611-4eea-8e8c-51016e473942-kube-api-access-sqk8m\") pod \"machine-approver-56656f9798-6tzzt\" (UID: \"c5de838f-c611-4eea-8e8c-51016e473942\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.519574 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.527430 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.540253 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.560763 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.586152 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.586405 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.590608 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-pdn56"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.599553 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.619242 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.619682 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4jtsc"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.628570 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-24rzj"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.644560 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.665025 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ntc2\" (UniqueName: \"kubernetes.io/projected/7eb6916d-1721-48bf-b67b-00ccc0144871-kube-api-access-9ntc2\") pod \"controller-manager-879f6c89f-xrpb6\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") " pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.683836 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcg84\" (UniqueName: \"kubernetes.io/projected/549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc-kube-api-access-gcg84\") pod \"cluster-samples-operator-665b6dd947-jskfz\" (UID: \"549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.695762 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dscrg\" (UniqueName: \"kubernetes.io/projected/22582027-e646-48aa-a5f5-7fb50f199830-kube-api-access-dscrg\") pod \"apiserver-7bbb656c7d-pbrtc\" (UID: \"22582027-e646-48aa-a5f5-7fb50f199830\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.716454 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fbhk\" (UniqueName: \"kubernetes.io/projected/f19c9286-752d-420e-83bb-010eefd59ea1-kube-api-access-5fbhk\") pod \"collect-profiles-29531520-9rhnh\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.717284 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.730406 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.750606 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-tx4dr"]
Feb 24 00:07:59 crc kubenswrapper[4756]: W0224 00:07:59.763982 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8083f569_75ed_42a3_aee0_3590a86f4329.slice/crio-625342c86e7043afa699203fb90e873fbb80627c85dd96d4ba7c0983a4b4a33b WatchSource:0}: Error finding container 625342c86e7043afa699203fb90e873fbb80627c85dd96d4ba7c0983a4b4a33b: Status 404 returned error can't find the container with id 625342c86e7043afa699203fb90e873fbb80627c85dd96d4ba7c0983a4b4a33b
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.806507 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.807788 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.807871 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d56473-d779-43ac-a1a0-2924bab188f5-config\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.807901 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-certificates\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.807942 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0707cda5-590e-4bc2-a966-970f98336869-service-ca-bundle\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.807965 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqncn\" (UniqueName: \"kubernetes.io/projected/89fb1442-f37a-40d6-ab9c-34e8351d2cdb-kube-api-access-nqncn\") pod \"olm-operator-6b444d44fb-r9bdf\" (UID: \"89fb1442-f37a-40d6-ab9c-34e8351d2cdb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.807989 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8ec53f8e-ccec-4f40-ba05-50816a70be2e-tmpfs\") pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808012 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f8b74a7-cfe2-4675-ad1c-3990da3d401c-srv-cert\") pod \"catalog-operator-68c6474976-5lvff\" (UID: \"6f8b74a7-cfe2-4675-ad1c-3990da3d401c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808034 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808056 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02033419-96e3-46ba-a650-d528bf50492e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4k4gq\" (UID: \"02033419-96e3-46ba-a650-d528bf50492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808104 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-bound-sa-token\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808128 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w56qt\" (UniqueName: \"kubernetes.io/projected/42c3019f-04ad-4a2e-af4d-411b5c0c9392-kube-api-access-w56qt\") pod \"openshift-controller-manager-operator-756b6f6bc6-2n9c6\" (UID: \"42c3019f-04ad-4a2e-af4d-411b5c0c9392\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808152 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f577e511-2dda-45d6-912e-80c6e01f4ab5-config\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808178 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jss6\" (UniqueName: \"kubernetes.io/projected/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-kube-api-access-2jss6\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808201 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r2lh\" (UniqueName: \"kubernetes.io/projected/6f8b74a7-cfe2-4675-ad1c-3990da3d401c-kube-api-access-5r2lh\") pod \"catalog-operator-68c6474976-5lvff\" (UID: \"6f8b74a7-cfe2-4675-ad1c-3990da3d401c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808223 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04c98116-2117-4e89-ac51-cf5ee7e8df6b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkb66\" (UID: \"04c98116-2117-4e89-ac51-cf5ee7e8df6b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808245 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djdt8\" (UniqueName: \"kubernetes.io/projected/0707cda5-590e-4bc2-a966-970f98336869-kube-api-access-djdt8\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808267 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ec53f8e-ccec-4f40-ba05-50816a70be2e-apiservice-cert\") pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808293 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7mj\" (UniqueName: \"kubernetes.io/projected/f2edfb46-22d6-4708-ab98-7e51b058dc0c-kube-api-access-wq7mj\") pod \"console-operator-58897d9998-f6x7l\" (UID: \"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808335 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551919c0-dd7a-4002-97f9-29f554d75179-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwmkd\" (UID: \"551919c0-dd7a-4002-97f9-29f554d75179\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808357 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6ws6\" (UniqueName: \"kubernetes.io/projected/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-kube-api-access-f6ws6\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808417 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02033419-96e3-46ba-a650-d528bf50492e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4k4gq\" (UID: \"02033419-96e3-46ba-a650-d528bf50492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808443 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2edfb46-22d6-4708-ab98-7e51b058dc0c-serving-cert\") pod \"console-operator-58897d9998-f6x7l\" (UID: \"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808477 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m55m2\" (UniqueName: \"kubernetes.io/projected/7f127f66-2445-4615-8949-1a2e70a902c0-kube-api-access-m55m2\") pod \"control-plane-machine-set-operator-78cbb6b69f-ppwx6\" (UID: \"7f127f66-2445-4615-8949-1a2e70a902c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808503 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02033419-96e3-46ba-a650-d528bf50492e-config\") pod \"kube-apiserver-operator-766d6c64bb-4k4gq\" (UID: \"02033419-96e3-46ba-a650-d528bf50492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808544 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-dir\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808590 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f577e511-2dda-45d6-912e-80c6e01f4ab5-etcd-ca\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808618 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f127f66-2445-4615-8949-1a2e70a902c0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ppwx6\" (UID: \"7f127f66-2445-4615-8949-1a2e70a902c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808655 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808681 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808721 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9m2k\" (UniqueName: \"kubernetes.io/projected/c7d56473-d779-43ac-a1a0-2924bab188f5-kube-api-access-t9m2k\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808744 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lprn9\" (UniqueName: \"kubernetes.io/projected/09d326ee-0878-4f58-b935-9405228e3008-kube-api-access-lprn9\") pod \"dns-operator-744455d44c-tznzq\" (UID: \"09d326ee-0878-4f58-b935-9405228e3008\") " pod="openshift-dns-operator/dns-operator-744455d44c-tznzq"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808782 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6gqs\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-kube-api-access-j6gqs\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808804 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808831 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76547\" (UniqueName: \"kubernetes.io/projected/551919c0-dd7a-4002-97f9-29f554d75179-kube-api-access-76547\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwmkd\" (UID: \"551919c0-dd7a-4002-97f9-29f554d75179\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808856 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808886 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f8b74a7-cfe2-4675-ad1c-3990da3d401c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5lvff\" (UID: \"6f8b74a7-cfe2-4675-ad1c-3990da3d401c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.808926 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09d326ee-0878-4f58-b935-9405228e3008-metrics-tls\") pod \"dns-operator-744455d44c-tznzq\" (UID: \"09d326ee-0878-4f58-b935-9405228e3008\") " pod="openshift-dns-operator/dns-operator-744455d44c-tznzq"
Feb 24 00:07:59 crc kubenswrapper[4756]: E0224 00:07:59.810188 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:00.31016308 +0000 UTC m=+137.221025883 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.810439 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04c98116-2117-4e89-ac51-cf5ee7e8df6b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkb66\" (UID: \"04c98116-2117-4e89-ac51-cf5ee7e8df6b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817231 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-policies\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817291 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817317 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0707cda5-590e-4bc2-a966-970f98336869-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817337 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0707cda5-590e-4bc2-a966-970f98336869-serving-cert\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817358 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0707cda5-590e-4bc2-a966-970f98336869-config\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817377 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c98116-2117-4e89-ac51-cf5ee7e8df6b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkb66\" (UID: \"04c98116-2117-4e89-ac51-cf5ee7e8df6b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817421 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f577e511-2dda-45d6-912e-80c6e01f4ab5-etcd-service-ca\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817441 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817467 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c3019f-04ad-4a2e-af4d-411b5c0c9392-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2n9c6\" (UID: \"42c3019f-04ad-4a2e-af4d-411b5c0c9392\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817485 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f577e511-2dda-45d6-912e-80c6e01f4ab5-serving-cert\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817597 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f20c38d-f4f0-47ea-a755-54328dc8fa90-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hqsxk\" (UID: \"1f20c38d-f4f0-47ea-a755-54328dc8fa90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817632 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.817908 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.818176 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-trusted-ca\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.818476 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd8fw\" (UniqueName: \"kubernetes.io/projected/1f20c38d-f4f0-47ea-a755-54328dc8fa90-kube-api-access-cd8fw\") pod \"package-server-manager-789f6589d5-hqsxk\" (UID: \"1f20c38d-f4f0-47ea-a755-54328dc8fa90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk"
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.818520 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c3019f-04ad-4a2e-af4d-411b5c0c9392-config\")
pod \"openshift-controller-manager-operator-756b6f6bc6-2n9c6\" (UID: \"42c3019f-04ad-4a2e-af4d-411b5c0c9392\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.818760 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7d56473-d779-43ac-a1a0-2924bab188f5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.818795 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9n6j\" (UniqueName: \"kubernetes.io/projected/8ec53f8e-ccec-4f40-ba05-50816a70be2e-kube-api-access-m9n6j\") pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.818814 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.818880 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgjkl\" (UniqueName: \"kubernetes.io/projected/7561977d-310e-468c-9240-58f94bfb8227-kube-api-access-tgjkl\") pod \"migrator-59844c95c7-b246x\" (UID: \"7561977d-310e-468c-9240-58f94bfb8227\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x" Feb 24 
00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.819894 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f577e511-2dda-45d6-912e-80c6e01f4ab5-etcd-client\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.819955 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/89fb1442-f37a-40d6-ab9c-34e8351d2cdb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r9bdf\" (UID: \"89fb1442-f37a-40d6-ab9c-34e8351d2cdb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.819985 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c7d56473-d779-43ac-a1a0-2924bab188f5-images\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.820230 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/89fb1442-f37a-40d6-ab9c-34e8351d2cdb-srv-cert\") pod \"olm-operator-6b444d44fb-r9bdf\" (UID: \"89fb1442-f37a-40d6-ab9c-34e8351d2cdb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.820427 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4cx6\" (UniqueName: \"kubernetes.io/projected/f577e511-2dda-45d6-912e-80c6e01f4ab5-kube-api-access-n4cx6\") pod 
\"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.820651 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-tls\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.820676 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.820694 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.820733 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ec53f8e-ccec-4f40-ba05-50816a70be2e-webhook-cert\") pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.820763 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.820794 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f2edfb46-22d6-4708-ab98-7e51b058dc0c-trusted-ca\") pod \"console-operator-58897d9998-f6x7l\" (UID: \"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.820984 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2edfb46-22d6-4708-ab98-7e51b058dc0c-config\") pod \"console-operator-58897d9998-f6x7l\" (UID: \"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.821005 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.821025 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.821086 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.821373 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551919c0-dd7a-4002-97f9-29f554d75179-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwmkd\" (UID: \"551919c0-dd7a-4002-97f9-29f554d75179\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.825665 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz"] Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.888012 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl"] Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.910469 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.921832 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922100 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f20c38d-f4f0-47ea-a755-54328dc8fa90-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hqsxk\" (UID: \"1f20c38d-f4f0-47ea-a755-54328dc8fa90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922125 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922152 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922178 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-trusted-ca\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922196 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd8fw\" (UniqueName: \"kubernetes.io/projected/1f20c38d-f4f0-47ea-a755-54328dc8fa90-kube-api-access-cd8fw\") pod \"package-server-manager-789f6589d5-hqsxk\" (UID: \"1f20c38d-f4f0-47ea-a755-54328dc8fa90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922214 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c3019f-04ad-4a2e-af4d-411b5c0c9392-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2n9c6\" (UID: \"42c3019f-04ad-4a2e-af4d-411b5c0c9392\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922239 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdtbx\" (UniqueName: \"kubernetes.io/projected/11fea2b9-4369-4534-8101-5fc365d29723-kube-api-access-pdtbx\") pod \"marketplace-operator-79b997595-j7xwn\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922260 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/044b4398-5b40-4f37-9ced-33b966b391df-proxy-tls\") pod \"machine-config-controller-84d6567774-z8lrn\" (UID: \"044b4398-5b40-4f37-9ced-33b966b391df\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922277 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/74fce8ed-7252-4b7c-96cb-e1e217504825-signing-cabundle\") pod \"service-ca-9c57cc56f-x4zw8\" (UID: \"74fce8ed-7252-4b7c-96cb-e1e217504825\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922294 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db8ac651-47dc-4e2c-97e0-35c3d5fa3b71-cert\") pod \"ingress-canary-d7npx\" (UID: \"db8ac651-47dc-4e2c-97e0-35c3d5fa3b71\") " pod="openshift-ingress-canary/ingress-canary-d7npx" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922312 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7d56473-d779-43ac-a1a0-2924bab188f5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922334 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-mountpoint-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922349 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfqj\" (UniqueName: 
\"kubernetes.io/projected/74fce8ed-7252-4b7c-96cb-e1e217504825-kube-api-access-hqfqj\") pod \"service-ca-9c57cc56f-x4zw8\" (UID: \"74fce8ed-7252-4b7c-96cb-e1e217504825\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922369 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9n6j\" (UniqueName: \"kubernetes.io/projected/8ec53f8e-ccec-4f40-ba05-50816a70be2e-kube-api-access-m9n6j\") pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922388 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922405 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94c58783-dca0-46ea-8aae-09314c370046-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922426 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgjkl\" (UniqueName: \"kubernetes.io/projected/7561977d-310e-468c-9240-58f94bfb8227-kube-api-access-tgjkl\") pod \"migrator-59844c95c7-b246x\" (UID: \"7561977d-310e-468c-9240-58f94bfb8227\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x" Feb 24 00:07:59 crc kubenswrapper[4756]: 
I0224 00:07:59.922445 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d14f4a93-e48b-4b1a-aa50-89e55755f479-config\") pod \"service-ca-operator-777779d784-q8m47\" (UID: \"d14f4a93-e48b-4b1a-aa50-89e55755f479\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922471 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f577e511-2dda-45d6-912e-80c6e01f4ab5-etcd-client\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922492 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/89fb1442-f37a-40d6-ab9c-34e8351d2cdb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r9bdf\" (UID: \"89fb1442-f37a-40d6-ab9c-34e8351d2cdb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922508 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c7d56473-d779-43ac-a1a0-2924bab188f5-images\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922524 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/89fb1442-f37a-40d6-ab9c-34e8351d2cdb-srv-cert\") pod \"olm-operator-6b444d44fb-r9bdf\" (UID: \"89fb1442-f37a-40d6-ab9c-34e8351d2cdb\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922545 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4cx6\" (UniqueName: \"kubernetes.io/projected/f577e511-2dda-45d6-912e-80c6e01f4ab5-kube-api-access-n4cx6\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922563 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-registration-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922602 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-socket-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922620 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lc9z\" (UniqueName: \"kubernetes.io/projected/6a4a3180-9cf3-4efd-a188-0b011c91c1c9-kube-api-access-2lc9z\") pod \"machine-config-server-2mxpf\" (UID: \"6a4a3180-9cf3-4efd-a188-0b011c91c1c9\") " pod="openshift-machine-config-operator/machine-config-server-2mxpf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922640 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922670 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-tls\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922688 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922708 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ec53f8e-ccec-4f40-ba05-50816a70be2e-webhook-cert\") pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922726 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922743 
4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f2edfb46-22d6-4708-ab98-7e51b058dc0c-trusted-ca\") pod \"console-operator-58897d9998-f6x7l\" (UID: \"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922760 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lhc9\" (UniqueName: \"kubernetes.io/projected/b3c7db43-a972-4077-94c9-d5d26f9bfabc-kube-api-access-9lhc9\") pod \"dns-default-qrnfg\" (UID: \"b3c7db43-a972-4077-94c9-d5d26f9bfabc\") " pod="openshift-dns/dns-default-qrnfg" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922780 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922797 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2edfb46-22d6-4708-ab98-7e51b058dc0c-config\") pod \"console-operator-58897d9998-f6x7l\" (UID: \"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922816 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922835 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922852 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ln46\" (UniqueName: \"kubernetes.io/projected/94c58783-dca0-46ea-8aae-09314c370046-kube-api-access-8ln46\") pod \"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922867 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j7xwn\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922894 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551919c0-dd7a-4002-97f9-29f554d75179-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwmkd\" (UID: \"551919c0-dd7a-4002-97f9-29f554d75179\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922913 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shcpn\" (UniqueName: \"kubernetes.io/projected/454c827a-ccb7-4b43-ac8a-bd1b38e77616-kube-api-access-shcpn\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922929 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5143125e-0b6e-417a-92de-e9b0c641713a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d8nhs\" (UID: \"5143125e-0b6e-417a-92de-e9b0c641713a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922960 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94c58783-dca0-46ea-8aae-09314c370046-proxy-tls\") pod \"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922979 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.922997 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94c58783-dca0-46ea-8aae-09314c370046-images\") pod 
\"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.923056 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d56473-d779-43ac-a1a0-2924bab188f5-config\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.923994 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-certificates\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924043 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-plugins-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924089 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0707cda5-590e-4bc2-a966-970f98336869-service-ca-bundle\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924108 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/454c827a-ccb7-4b43-ac8a-bd1b38e77616-stats-auth\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924124 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5143125e-0b6e-417a-92de-e9b0c641713a-config\") pod \"kube-controller-manager-operator-78b949d7b-d8nhs\" (UID: \"5143125e-0b6e-417a-92de-e9b0c641713a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924161 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqncn\" (UniqueName: \"kubernetes.io/projected/89fb1442-f37a-40d6-ab9c-34e8351d2cdb-kube-api-access-nqncn\") pod \"olm-operator-6b444d44fb-r9bdf\" (UID: \"89fb1442-f37a-40d6-ab9c-34e8351d2cdb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924179 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8ec53f8e-ccec-4f40-ba05-50816a70be2e-tmpfs\") pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924197 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f8b74a7-cfe2-4675-ad1c-3990da3d401c-srv-cert\") pod \"catalog-operator-68c6474976-5lvff\" (UID: \"6f8b74a7-cfe2-4675-ad1c-3990da3d401c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" Feb 24 00:07:59 crc 
kubenswrapper[4756]: I0224 00:07:59.924213 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924330 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02033419-96e3-46ba-a650-d528bf50492e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4k4gq\" (UID: \"02033419-96e3-46ba-a650-d528bf50492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924351 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-bound-sa-token\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924370 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w56qt\" (UniqueName: \"kubernetes.io/projected/42c3019f-04ad-4a2e-af4d-411b5c0c9392-kube-api-access-w56qt\") pod \"openshift-controller-manager-operator-756b6f6bc6-2n9c6\" (UID: \"42c3019f-04ad-4a2e-af4d-411b5c0c9392\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924407 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f577e511-2dda-45d6-912e-80c6e01f4ab5-config\") pod \"etcd-operator-b45778765-7qp8v\" 
(UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924446 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r2lh\" (UniqueName: \"kubernetes.io/projected/6f8b74a7-cfe2-4675-ad1c-3990da3d401c-kube-api-access-5r2lh\") pod \"catalog-operator-68c6474976-5lvff\" (UID: \"6f8b74a7-cfe2-4675-ad1c-3990da3d401c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924483 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jss6\" (UniqueName: \"kubernetes.io/projected/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-kube-api-access-2jss6\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924486 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-ca-trust-extracted\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924500 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2dc2\" (UniqueName: \"kubernetes.io/projected/482daac8-f9ac-43d2-a101-cf64acefa9d3-kube-api-access-k2dc2\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924635 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/04c98116-2117-4e89-ac51-cf5ee7e8df6b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkb66\" (UID: \"04c98116-2117-4e89-ac51-cf5ee7e8df6b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924669 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6a4a3180-9cf3-4efd-a188-0b011c91c1c9-node-bootstrap-token\") pod \"machine-config-server-2mxpf\" (UID: \"6a4a3180-9cf3-4efd-a188-0b011c91c1c9\") " pod="openshift-machine-config-operator/machine-config-server-2mxpf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924715 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djdt8\" (UniqueName: \"kubernetes.io/projected/0707cda5-590e-4bc2-a966-970f98336869-kube-api-access-djdt8\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924738 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-968c9\" (UniqueName: \"kubernetes.io/projected/044b4398-5b40-4f37-9ced-33b966b391df-kube-api-access-968c9\") pod \"machine-config-controller-84d6567774-z8lrn\" (UID: \"044b4398-5b40-4f37-9ced-33b966b391df\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924958 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ec53f8e-ccec-4f40-ba05-50816a70be2e-apiservice-cert\") pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.924982 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7mj\" (UniqueName: \"kubernetes.io/projected/f2edfb46-22d6-4708-ab98-7e51b058dc0c-kube-api-access-wq7mj\") pod \"console-operator-58897d9998-f6x7l\" (UID: \"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925020 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j98f6\" (UniqueName: \"kubernetes.io/projected/db8ac651-47dc-4e2c-97e0-35c3d5fa3b71-kube-api-access-j98f6\") pod \"ingress-canary-d7npx\" (UID: \"db8ac651-47dc-4e2c-97e0-35c3d5fa3b71\") " pod="openshift-ingress-canary/ingress-canary-d7npx" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925097 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551919c0-dd7a-4002-97f9-29f554d75179-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwmkd\" (UID: \"551919c0-dd7a-4002-97f9-29f554d75179\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925119 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/044b4398-5b40-4f37-9ced-33b966b391df-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z8lrn\" (UID: \"044b4398-5b40-4f37-9ced-33b966b391df\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925160 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-f6ws6\" (UniqueName: \"kubernetes.io/projected/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-kube-api-access-f6ws6\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925192 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02033419-96e3-46ba-a650-d528bf50492e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4k4gq\" (UID: \"02033419-96e3-46ba-a650-d528bf50492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925212 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/454c827a-ccb7-4b43-ac8a-bd1b38e77616-service-ca-bundle\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925237 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2edfb46-22d6-4708-ab98-7e51b058dc0c-serving-cert\") pod \"console-operator-58897d9998-f6x7l\" (UID: \"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925255 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5143125e-0b6e-417a-92de-e9b0c641713a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d8nhs\" (UID: \"5143125e-0b6e-417a-92de-e9b0c641713a\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925277 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m55m2\" (UniqueName: \"kubernetes.io/projected/7f127f66-2445-4615-8949-1a2e70a902c0-kube-api-access-m55m2\") pod \"control-plane-machine-set-operator-78cbb6b69f-ppwx6\" (UID: \"7f127f66-2445-4615-8949-1a2e70a902c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925296 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02033419-96e3-46ba-a650-d528bf50492e-config\") pod \"kube-apiserver-operator-766d6c64bb-4k4gq\" (UID: \"02033419-96e3-46ba-a650-d528bf50492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925314 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfcd7\" (UniqueName: \"kubernetes.io/projected/ba791051-d526-439d-839b-7b84623e52f1-kube-api-access-jfcd7\") pod \"multus-admission-controller-857f4d67dd-cvvhf\" (UID: \"ba791051-d526-439d-839b-7b84623e52f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925354 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-dir\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925372 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/454c827a-ccb7-4b43-ac8a-bd1b38e77616-default-certificate\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925395 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3c7db43-a972-4077-94c9-d5d26f9bfabc-metrics-tls\") pod \"dns-default-qrnfg\" (UID: \"b3c7db43-a972-4077-94c9-d5d26f9bfabc\") " pod="openshift-dns/dns-default-qrnfg" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925427 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f577e511-2dda-45d6-912e-80c6e01f4ab5-etcd-ca\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925446 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d14f4a93-e48b-4b1a-aa50-89e55755f479-serving-cert\") pod \"service-ca-operator-777779d784-q8m47\" (UID: \"d14f4a93-e48b-4b1a-aa50-89e55755f479\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925463 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/454c827a-ccb7-4b43-ac8a-bd1b38e77616-metrics-certs\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925496 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j7xwn\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925532 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f127f66-2445-4615-8949-1a2e70a902c0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ppwx6\" (UID: \"7f127f66-2445-4615-8949-1a2e70a902c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925567 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925588 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9m2k\" (UniqueName: \"kubernetes.io/projected/c7d56473-d779-43ac-a1a0-2924bab188f5-kube-api-access-t9m2k\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925607 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/ba791051-d526-439d-839b-7b84623e52f1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cvvhf\" (UID: \"ba791051-d526-439d-839b-7b84623e52f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925626 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lprn9\" (UniqueName: \"kubernetes.io/projected/09d326ee-0878-4f58-b935-9405228e3008-kube-api-access-lprn9\") pod \"dns-operator-744455d44c-tznzq\" (UID: \"09d326ee-0878-4f58-b935-9405228e3008\") " pod="openshift-dns-operator/dns-operator-744455d44c-tznzq" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925649 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6gqs\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-kube-api-access-j6gqs\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925668 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925688 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76547\" (UniqueName: \"kubernetes.io/projected/551919c0-dd7a-4002-97f9-29f554d75179-kube-api-access-76547\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwmkd\" (UID: \"551919c0-dd7a-4002-97f9-29f554d75179\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" 
Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925708 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925728 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f8b74a7-cfe2-4675-ad1c-3990da3d401c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5lvff\" (UID: \"6f8b74a7-cfe2-4675-ad1c-3990da3d401c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925744 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/74fce8ed-7252-4b7c-96cb-e1e217504825-signing-key\") pod \"service-ca-9c57cc56f-x4zw8\" (UID: \"74fce8ed-7252-4b7c-96cb-e1e217504825\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925764 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04c98116-2117-4e89-ac51-cf5ee7e8df6b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkb66\" (UID: \"04c98116-2117-4e89-ac51-cf5ee7e8df6b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925783 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-policies\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925811 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09d326ee-0878-4f58-b935-9405228e3008-metrics-tls\") pod \"dns-operator-744455d44c-tznzq\" (UID: \"09d326ee-0878-4f58-b935-9405228e3008\") " pod="openshift-dns-operator/dns-operator-744455d44c-tznzq" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925829 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925846 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6a4a3180-9cf3-4efd-a188-0b011c91c1c9-certs\") pod \"machine-config-server-2mxpf\" (UID: \"6a4a3180-9cf3-4efd-a188-0b011c91c1c9\") " pod="openshift-machine-config-operator/machine-config-server-2mxpf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925876 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0707cda5-590e-4bc2-a966-970f98336869-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925896 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0707cda5-590e-4bc2-a966-970f98336869-serving-cert\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925916 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0707cda5-590e-4bc2-a966-970f98336869-config\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925934 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c98116-2117-4e89-ac51-cf5ee7e8df6b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkb66\" (UID: \"04c98116-2117-4e89-ac51-cf5ee7e8df6b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925955 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3c7db43-a972-4077-94c9-d5d26f9bfabc-config-volume\") pod \"dns-default-qrnfg\" (UID: \"b3c7db43-a972-4077-94c9-d5d26f9bfabc\") " pod="openshift-dns/dns-default-qrnfg" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.925988 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f577e511-2dda-45d6-912e-80c6e01f4ab5-etcd-service-ca\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.926007 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.926041 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c3019f-04ad-4a2e-af4d-411b5c0c9392-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2n9c6\" (UID: \"42c3019f-04ad-4a2e-af4d-411b5c0c9392\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.926062 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f577e511-2dda-45d6-912e-80c6e01f4ab5-serving-cert\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.926155 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2vd\" (UniqueName: \"kubernetes.io/projected/d14f4a93-e48b-4b1a-aa50-89e55755f479-kube-api-access-rd2vd\") pod \"service-ca-operator-777779d784-q8m47\" (UID: \"d14f4a93-e48b-4b1a-aa50-89e55755f479\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.926174 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" 
(UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-csi-data-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.929205 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-tls\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.929807 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0707cda5-590e-4bc2-a966-970f98336869-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.930516 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-certificates\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.932775 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.933306 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.933656 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8ec53f8e-ccec-4f40-ba05-50816a70be2e-tmpfs\") pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.933670 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1f20c38d-f4f0-47ea-a755-54328dc8fa90-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hqsxk\" (UID: \"1f20c38d-f4f0-47ea-a755-54328dc8fa90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.933878 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.934601 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c3019f-04ad-4a2e-af4d-411b5c0c9392-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-2n9c6\" (UID: 
\"42c3019f-04ad-4a2e-af4d-411b5c0c9392\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.935420 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.935878 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/f577e511-2dda-45d6-912e-80c6e01f4ab5-etcd-service-ca\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.936507 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-policies\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.937423 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0707cda5-590e-4bc2-a966-970f98336869-service-ca-bundle\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.937606 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0707cda5-590e-4bc2-a966-970f98336869-config\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.937844 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.937954 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f577e511-2dda-45d6-912e-80c6e01f4ab5-config\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.938533 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/f577e511-2dda-45d6-912e-80c6e01f4ab5-etcd-ca\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.938703 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.939195 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/551919c0-dd7a-4002-97f9-29f554d75179-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwmkd\" (UID: \"551919c0-dd7a-4002-97f9-29f554d75179\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.939340 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.943233 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7d56473-d779-43ac-a1a0-2924bab188f5-config\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.943732 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.943978 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c7d56473-d779-43ac-a1a0-2924bab188f5-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.944441 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/09d326ee-0878-4f58-b935-9405228e3008-metrics-tls\") pod \"dns-operator-744455d44c-tznzq\" (UID: \"09d326ee-0878-4f58-b935-9405228e3008\") " pod="openshift-dns-operator/dns-operator-744455d44c-tznzq" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.944547 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6f8b74a7-cfe2-4675-ad1c-3990da3d401c-profile-collector-cert\") pod \"catalog-operator-68c6474976-5lvff\" (UID: \"6f8b74a7-cfe2-4675-ad1c-3990da3d401c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.944701 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04c98116-2117-4e89-ac51-cf5ee7e8df6b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkb66\" (UID: \"04c98116-2117-4e89-ac51-cf5ee7e8df6b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.944887 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8ec53f8e-ccec-4f40-ba05-50816a70be2e-webhook-cert\") pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.945095 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02033419-96e3-46ba-a650-d528bf50492e-config\") pod 
\"kube-apiserver-operator-766d6c64bb-4k4gq\" (UID: \"02033419-96e3-46ba-a650-d528bf50492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.945475 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.945561 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/89fb1442-f37a-40d6-ab9c-34e8351d2cdb-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r9bdf\" (UID: \"89fb1442-f37a-40d6-ab9c-34e8351d2cdb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.945882 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-dir\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.946681 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f2edfb46-22d6-4708-ab98-7e51b058dc0c-trusted-ca\") pod \"console-operator-58897d9998-f6x7l\" (UID: \"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.947366 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f2edfb46-22d6-4708-ab98-7e51b058dc0c-config\") pod \"console-operator-58897d9998-f6x7l\" (UID: \"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.948366 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: E0224 00:07:59.949199 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:00.449177647 +0000 UTC m=+137.360040280 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.950318 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.951642 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f127f66-2445-4615-8949-1a2e70a902c0-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-ppwx6\" (UID: \"7f127f66-2445-4615-8949-1a2e70a902c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.951888 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-trusted-ca\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.953131 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8ec53f8e-ccec-4f40-ba05-50816a70be2e-apiservice-cert\") 
pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.953253 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c7d56473-d779-43ac-a1a0-2924bab188f5-images\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.953517 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6f8b74a7-cfe2-4675-ad1c-3990da3d401c-srv-cert\") pod \"catalog-operator-68c6474976-5lvff\" (UID: \"6f8b74a7-cfe2-4675-ad1c-3990da3d401c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.953799 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.953824 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551919c0-dd7a-4002-97f9-29f554d75179-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwmkd\" (UID: \"551919c0-dd7a-4002-97f9-29f554d75179\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.955165 4756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04c98116-2117-4e89-ac51-cf5ee7e8df6b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkb66\" (UID: \"04c98116-2117-4e89-ac51-cf5ee7e8df6b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.957367 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f577e511-2dda-45d6-912e-80c6e01f4ab5-etcd-client\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.957367 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c3019f-04ad-4a2e-af4d-411b5c0c9392-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-2n9c6\" (UID: \"42c3019f-04ad-4a2e-af4d-411b5c0c9392\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.957573 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0707cda5-590e-4bc2-a966-970f98336869-serving-cert\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.957928 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2edfb46-22d6-4708-ab98-7e51b058dc0c-serving-cert\") pod \"console-operator-58897d9998-f6x7l\" (UID: \"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 
00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.958311 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f577e511-2dda-45d6-912e-80c6e01f4ab5-serving-cert\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.958933 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-installation-pull-secrets\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.954506 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/89fb1442-f37a-40d6-ab9c-34e8351d2cdb-srv-cert\") pod \"olm-operator-6b444d44fb-r9bdf\" (UID: \"89fb1442-f37a-40d6-ab9c-34e8351d2cdb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.959717 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02033419-96e3-46ba-a650-d528bf50492e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-4k4gq\" (UID: \"02033419-96e3-46ba-a650-d528bf50492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.963625 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:07:59 crc kubenswrapper[4756]: I0224 00:07:59.979430 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02033419-96e3-46ba-a650-d528bf50492e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-4k4gq\" (UID: \"02033419-96e3-46ba-a650-d528bf50492e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.011138 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6gqs\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-kube-api-access-j6gqs\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.022648 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.028683 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2dc2\" (UniqueName: \"kubernetes.io/projected/482daac8-f9ac-43d2-a101-cf64acefa9d3-kube-api-access-k2dc2\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.028938 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6a4a3180-9cf3-4efd-a188-0b011c91c1c9-node-bootstrap-token\") pod 
\"machine-config-server-2mxpf\" (UID: \"6a4a3180-9cf3-4efd-a188-0b011c91c1c9\") " pod="openshift-machine-config-operator/machine-config-server-2mxpf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.029723 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-968c9\" (UniqueName: \"kubernetes.io/projected/044b4398-5b40-4f37-9ced-33b966b391df-kube-api-access-968c9\") pod \"machine-config-controller-84d6567774-z8lrn\" (UID: \"044b4398-5b40-4f37-9ced-33b966b391df\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.029768 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j98f6\" (UniqueName: \"kubernetes.io/projected/db8ac651-47dc-4e2c-97e0-35c3d5fa3b71-kube-api-access-j98f6\") pod \"ingress-canary-d7npx\" (UID: \"db8ac651-47dc-4e2c-97e0-35c3d5fa3b71\") " pod="openshift-ingress-canary/ingress-canary-d7npx" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.029840 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/044b4398-5b40-4f37-9ced-33b966b391df-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z8lrn\" (UID: \"044b4398-5b40-4f37-9ced-33b966b391df\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.029895 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/454c827a-ccb7-4b43-ac8a-bd1b38e77616-service-ca-bundle\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.029915 4756 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5143125e-0b6e-417a-92de-e9b0c641713a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d8nhs\" (UID: \"5143125e-0b6e-417a-92de-e9b0c641713a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.030724 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/454c827a-ccb7-4b43-ac8a-bd1b38e77616-service-ca-bundle\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031054 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfcd7\" (UniqueName: \"kubernetes.io/projected/ba791051-d526-439d-839b-7b84623e52f1-kube-api-access-jfcd7\") pod \"multus-admission-controller-857f4d67dd-cvvhf\" (UID: \"ba791051-d526-439d-839b-7b84623e52f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031110 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/454c827a-ccb7-4b43-ac8a-bd1b38e77616-default-certificate\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031131 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d14f4a93-e48b-4b1a-aa50-89e55755f479-serving-cert\") pod \"service-ca-operator-777779d784-q8m47\" (UID: \"d14f4a93-e48b-4b1a-aa50-89e55755f479\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031173 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/454c827a-ccb7-4b43-ac8a-bd1b38e77616-metrics-certs\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031210 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3c7db43-a972-4077-94c9-d5d26f9bfabc-metrics-tls\") pod \"dns-default-qrnfg\" (UID: \"b3c7db43-a972-4077-94c9-d5d26f9bfabc\") " pod="openshift-dns/dns-default-qrnfg" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031228 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j7xwn\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031273 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031294 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba791051-d526-439d-839b-7b84623e52f1-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-cvvhf\" (UID: \"ba791051-d526-439d-839b-7b84623e52f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031347 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/74fce8ed-7252-4b7c-96cb-e1e217504825-signing-key\") pod \"service-ca-9c57cc56f-x4zw8\" (UID: \"74fce8ed-7252-4b7c-96cb-e1e217504825\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031369 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6a4a3180-9cf3-4efd-a188-0b011c91c1c9-certs\") pod \"machine-config-server-2mxpf\" (UID: \"6a4a3180-9cf3-4efd-a188-0b011c91c1c9\") " pod="openshift-machine-config-operator/machine-config-server-2mxpf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031126 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/044b4398-5b40-4f37-9ced-33b966b391df-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-z8lrn\" (UID: \"044b4398-5b40-4f37-9ced-33b966b391df\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031841 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3c7db43-a972-4077-94c9-d5d26f9bfabc-config-volume\") pod \"dns-default-qrnfg\" (UID: \"b3c7db43-a972-4077-94c9-d5d26f9bfabc\") " pod="openshift-dns/dns-default-qrnfg" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031905 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd2vd\" (UniqueName: 
\"kubernetes.io/projected/d14f4a93-e48b-4b1a-aa50-89e55755f479-kube-api-access-rd2vd\") pod \"service-ca-operator-777779d784-q8m47\" (UID: \"d14f4a93-e48b-4b1a-aa50-89e55755f479\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.031926 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-csi-data-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032003 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdtbx\" (UniqueName: \"kubernetes.io/projected/11fea2b9-4369-4534-8101-5fc365d29723-kube-api-access-pdtbx\") pod \"marketplace-operator-79b997595-j7xwn\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032026 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/044b4398-5b40-4f37-9ced-33b966b391df-proxy-tls\") pod \"machine-config-controller-84d6567774-z8lrn\" (UID: \"044b4398-5b40-4f37-9ced-33b966b391df\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032058 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/74fce8ed-7252-4b7c-96cb-e1e217504825-signing-cabundle\") pod \"service-ca-9c57cc56f-x4zw8\" (UID: \"74fce8ed-7252-4b7c-96cb-e1e217504825\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032089 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db8ac651-47dc-4e2c-97e0-35c3d5fa3b71-cert\") pod \"ingress-canary-d7npx\" (UID: \"db8ac651-47dc-4e2c-97e0-35c3d5fa3b71\") " pod="openshift-ingress-canary/ingress-canary-d7npx" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032108 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-mountpoint-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032141 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqfqj\" (UniqueName: \"kubernetes.io/projected/74fce8ed-7252-4b7c-96cb-e1e217504825-kube-api-access-hqfqj\") pod \"service-ca-9c57cc56f-x4zw8\" (UID: \"74fce8ed-7252-4b7c-96cb-e1e217504825\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032194 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94c58783-dca0-46ea-8aae-09314c370046-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032235 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d14f4a93-e48b-4b1a-aa50-89e55755f479-config\") pod \"service-ca-operator-777779d784-q8m47\" (UID: \"d14f4a93-e48b-4b1a-aa50-89e55755f479\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032299 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-registration-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032327 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-socket-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032346 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lc9z\" (UniqueName: \"kubernetes.io/projected/6a4a3180-9cf3-4efd-a188-0b011c91c1c9-kube-api-access-2lc9z\") pod \"machine-config-server-2mxpf\" (UID: \"6a4a3180-9cf3-4efd-a188-0b011c91c1c9\") " pod="openshift-machine-config-operator/machine-config-server-2mxpf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032394 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lhc9\" (UniqueName: \"kubernetes.io/projected/b3c7db43-a972-4077-94c9-d5d26f9bfabc-kube-api-access-9lhc9\") pod \"dns-default-qrnfg\" (UID: \"b3c7db43-a972-4077-94c9-d5d26f9bfabc\") " pod="openshift-dns/dns-default-qrnfg" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032438 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ln46\" (UniqueName: \"kubernetes.io/projected/94c58783-dca0-46ea-8aae-09314c370046-kube-api-access-8ln46\") pod \"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:08:00 crc 
kubenswrapper[4756]: I0224 00:08:00.032483 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j7xwn\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.032509 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shcpn\" (UniqueName: \"kubernetes.io/projected/454c827a-ccb7-4b43-ac8a-bd1b38e77616-kube-api-access-shcpn\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.034402 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqncn\" (UniqueName: \"kubernetes.io/projected/89fb1442-f37a-40d6-ab9c-34e8351d2cdb-kube-api-access-nqncn\") pod \"olm-operator-6b444d44fb-r9bdf\" (UID: \"89fb1442-f37a-40d6-ab9c-34e8351d2cdb\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.035412 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3c7db43-a972-4077-94c9-d5d26f9bfabc-config-volume\") pod \"dns-default-qrnfg\" (UID: \"b3c7db43-a972-4077-94c9-d5d26f9bfabc\") " pod="openshift-dns/dns-default-qrnfg" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.035918 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d14f4a93-e48b-4b1a-aa50-89e55755f479-config\") pod \"service-ca-operator-777779d784-q8m47\" (UID: \"d14f4a93-e48b-4b1a-aa50-89e55755f479\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.036460 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b3c7db43-a972-4077-94c9-d5d26f9bfabc-metrics-tls\") pod \"dns-default-qrnfg\" (UID: \"b3c7db43-a972-4077-94c9-d5d26f9bfabc\") " pod="openshift-dns/dns-default-qrnfg" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.036584 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-csi-data-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.036612 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/74fce8ed-7252-4b7c-96cb-e1e217504825-signing-cabundle\") pod \"service-ca-9c57cc56f-x4zw8\" (UID: \"74fce8ed-7252-4b7c-96cb-e1e217504825\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.036643 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-mountpoint-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.036758 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/454c827a-ccb7-4b43-ac8a-bd1b38e77616-metrics-certs\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:00 crc 
kubenswrapper[4756]: I0224 00:08:00.037286 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94c58783-dca0-46ea-8aae-09314c370046-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.037386 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-registration-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.037390 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/454c827a-ccb7-4b43-ac8a-bd1b38e77616-default-certificate\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.037529 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d14f4a93-e48b-4b1a-aa50-89e55755f479-serving-cert\") pod \"service-ca-operator-777779d784-q8m47\" (UID: \"d14f4a93-e48b-4b1a-aa50-89e55755f479\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.037639 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-socket-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 
00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.038371 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6a4a3180-9cf3-4efd-a188-0b011c91c1c9-node-bootstrap-token\") pod \"machine-config-server-2mxpf\" (UID: \"6a4a3180-9cf3-4efd-a188-0b011c91c1c9\") " pod="openshift-machine-config-operator/machine-config-server-2mxpf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.038472 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5143125e-0b6e-417a-92de-e9b0c641713a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d8nhs\" (UID: \"5143125e-0b6e-417a-92de-e9b0c641713a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.038563 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5143125e-0b6e-417a-92de-e9b0c641713a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d8nhs\" (UID: \"5143125e-0b6e-417a-92de-e9b0c641713a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.038643 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:00.538620484 +0000 UTC m=+137.449483117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.038667 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/94c58783-dca0-46ea-8aae-09314c370046-proxy-tls\") pod \"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.038702 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94c58783-dca0-46ea-8aae-09314c370046-images\") pod \"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.038729 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-plugins-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.038754 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/454c827a-ccb7-4b43-ac8a-bd1b38e77616-stats-auth\") pod \"router-default-5444994796-522ht\" (UID: 
\"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.038773 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5143125e-0b6e-417a-92de-e9b0c641713a-config\") pod \"kube-controller-manager-operator-78b949d7b-d8nhs\" (UID: \"5143125e-0b6e-417a-92de-e9b0c641713a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.038946 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6a4a3180-9cf3-4efd-a188-0b011c91c1c9-certs\") pod \"machine-config-server-2mxpf\" (UID: \"6a4a3180-9cf3-4efd-a188-0b011c91c1c9\") " pod="openshift-machine-config-operator/machine-config-server-2mxpf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.038997 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/482daac8-f9ac-43d2-a101-cf64acefa9d3-plugins-dir\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.039388 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5143125e-0b6e-417a-92de-e9b0c641713a-config\") pod \"kube-controller-manager-operator-78b949d7b-d8nhs\" (UID: \"5143125e-0b6e-417a-92de-e9b0c641713a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.039592 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94c58783-dca0-46ea-8aae-09314c370046-images\") pod 
\"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.041631 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-j7xwn\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.043621 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/db8ac651-47dc-4e2c-97e0-35c3d5fa3b71-cert\") pod \"ingress-canary-d7npx\" (UID: \"db8ac651-47dc-4e2c-97e0-35c3d5fa3b71\") " pod="openshift-ingress-canary/ingress-canary-d7npx" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.045033 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-j7xwn\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.047164 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ba791051-d526-439d-839b-7b84623e52f1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-cvvhf\" (UID: \"ba791051-d526-439d-839b-7b84623e52f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.050643 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/94c58783-dca0-46ea-8aae-09314c370046-proxy-tls\") pod \"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.051619 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/74fce8ed-7252-4b7c-96cb-e1e217504825-signing-key\") pod \"service-ca-9c57cc56f-x4zw8\" (UID: \"74fce8ed-7252-4b7c-96cb-e1e217504825\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.054838 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/044b4398-5b40-4f37-9ced-33b966b391df-proxy-tls\") pod \"machine-config-controller-84d6567774-z8lrn\" (UID: \"044b4398-5b40-4f37-9ced-33b966b391df\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.056525 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/454c827a-ccb7-4b43-ac8a-bd1b38e77616-stats-auth\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.058056 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29531520-pdn56"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.059816 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.060609 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6ws6\" (UniqueName: \"kubernetes.io/projected/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-kube-api-access-f6ws6\") pod \"oauth-openshift-558db77b4-5qkbl\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.077138 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd8fw\" (UniqueName: \"kubernetes.io/projected/1f20c38d-f4f0-47ea-a755-54328dc8fa90-kube-api-access-cd8fw\") pod \"package-server-manager-789f6589d5-hqsxk\" (UID: \"1f20c38d-f4f0-47ea-a755-54328dc8fa90\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.082720 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.096969 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9m2k\" (UniqueName: \"kubernetes.io/projected/c7d56473-d779-43ac-a1a0-2924bab188f5-kube-api-access-t9m2k\") pod \"machine-api-operator-5694c8668f-dm5nw\" (UID: \"c7d56473-d779-43ac-a1a0-2924bab188f5\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.098513 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.118158 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lprn9\" (UniqueName: \"kubernetes.io/projected/09d326ee-0878-4f58-b935-9405228e3008-kube-api-access-lprn9\") pod \"dns-operator-744455d44c-tznzq\" (UID: \"09d326ee-0878-4f58-b935-9405228e3008\") " pod="openshift-dns-operator/dns-operator-744455d44c-tznzq" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.119983 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-tznzq" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.136729 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w56qt\" (UniqueName: \"kubernetes.io/projected/42c3019f-04ad-4a2e-af4d-411b5c0c9392-kube-api-access-w56qt\") pod \"openshift-controller-manager-operator-756b6f6bc6-2n9c6\" (UID: \"42c3019f-04ad-4a2e-af4d-411b5c0c9392\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.139936 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.140103 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:00.640059575 +0000 UTC m=+137.550933538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.140265 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.140635 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:00.640627234 +0000 UTC m=+137.551489857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.156228 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04c98116-2117-4e89-ac51-cf5ee7e8df6b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zkb66\" (UID: \"04c98116-2117-4e89-ac51-cf5ee7e8df6b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.169514 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.176126 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgjkl\" (UniqueName: \"kubernetes.io/projected/7561977d-310e-468c-9240-58f94bfb8227-kube-api-access-tgjkl\") pod \"migrator-59844c95c7-b246x\" (UID: \"7561977d-310e-468c-9240-58f94bfb8227\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.192869 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-24rzj"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.193181 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7mj\" (UniqueName: \"kubernetes.io/projected/f2edfb46-22d6-4708-ab98-7e51b058dc0c-kube-api-access-wq7mj\") pod \"console-operator-58897d9998-f6x7l\" (UID: 
\"f2edfb46-22d6-4708-ab98-7e51b058dc0c\") " pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.206098 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4jtsc"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.209732 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.221972 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-bound-sa-token\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.239089 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9n6j\" (UniqueName: \"kubernetes.io/projected/8ec53f8e-ccec-4f40-ba05-50816a70be2e-kube-api-access-m9n6j\") pod \"packageserver-d55dfcdfc-bsdsk\" (UID: \"8ec53f8e-ccec-4f40-ba05-50816a70be2e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.243089 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.243278 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-24 00:08:00.743255786 +0000 UTC m=+137.654118419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.243438 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.243857 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:00.743848316 +0000 UTC m=+137.654710949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.263519 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jss6\" (UniqueName: \"kubernetes.io/projected/5ecf9e21-1dc7-4523-a278-5edcefaccd3f-kube-api-access-2jss6\") pod \"ingress-operator-5b745b69d9-zvbpv\" (UID: \"5ecf9e21-1dc7-4523-a278-5edcefaccd3f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.275006 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.276806 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m55m2\" (UniqueName: \"kubernetes.io/projected/7f127f66-2445-4615-8949-1a2e70a902c0-kube-api-access-m55m2\") pod \"control-plane-machine-set-operator-78cbb6b69f-ppwx6\" (UID: \"7f127f66-2445-4615-8949-1a2e70a902c0\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.300562 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r2lh\" (UniqueName: \"kubernetes.io/projected/6f8b74a7-cfe2-4675-ad1c-3990da3d401c-kube-api-access-5r2lh\") pod \"catalog-operator-68c6474976-5lvff\" (UID: \"6f8b74a7-cfe2-4675-ad1c-3990da3d401c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" Feb 24 00:08:00 crc 
kubenswrapper[4756]: I0224 00:08:00.309529 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.309919 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xrpb6"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.316759 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4cx6\" (UniqueName: \"kubernetes.io/projected/f577e511-2dda-45d6-912e-80c6e01f4ab5-kube-api-access-n4cx6\") pod \"etcd-operator-b45778765-7qp8v\" (UID: \"f577e511-2dda-45d6-912e-80c6e01f4ab5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.318401 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-988lm"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.331175 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.338711 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djdt8\" (UniqueName: \"kubernetes.io/projected/0707cda5-590e-4bc2-a966-970f98336869-kube-api-access-djdt8\") pod \"authentication-operator-69f744f599-dhtw6\" (UID: \"0707cda5-590e-4bc2-a966-970f98336869\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.338753 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.344768 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.345440 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:00.845416211 +0000 UTC m=+137.756278844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.357504 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.362855 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76547\" (UniqueName: \"kubernetes.io/projected/551919c0-dd7a-4002-97f9-29f554d75179-kube-api-access-76547\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwmkd\" (UID: \"551919c0-dd7a-4002-97f9-29f554d75179\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.367533 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.375144 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.394492 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-tznzq"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.402108 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.403226 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2dc2\" (UniqueName: \"kubernetes.io/projected/482daac8-f9ac-43d2-a101-cf64acefa9d3-kube-api-access-k2dc2\") pod \"csi-hostpathplugin-gdn5q\" (UID: \"482daac8-f9ac-43d2-a101-cf64acefa9d3\") " pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.409676 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz"] Feb 24 00:08:00 crc 
kubenswrapper[4756]: I0224 00:08:00.413505 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.422237 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-968c9\" (UniqueName: \"kubernetes.io/projected/044b4398-5b40-4f37-9ced-33b966b391df-kube-api-access-968c9\") pod \"machine-config-controller-84d6567774-z8lrn\" (UID: \"044b4398-5b40-4f37-9ced-33b966b391df\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.426833 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.434173 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.437184 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j98f6\" (UniqueName: \"kubernetes.io/projected/db8ac651-47dc-4e2c-97e0-35c3d5fa3b71-kube-api-access-j98f6\") pod \"ingress-canary-d7npx\" (UID: \"db8ac651-47dc-4e2c-97e0-35c3d5fa3b71\") " pod="openshift-ingress-canary/ingress-canary-d7npx" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.441621 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.446314 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.446900 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:00.946872663 +0000 UTC m=+137.857735466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.450816 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.461673 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfcd7\" (UniqueName: \"kubernetes.io/projected/ba791051-d526-439d-839b-7b84623e52f1-kube-api-access-jfcd7\") pod \"multus-admission-controller-857f4d67dd-cvvhf\" (UID: \"ba791051-d526-439d-839b-7b84623e52f1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.474294 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.478036 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd2vd\" (UniqueName: \"kubernetes.io/projected/d14f4a93-e48b-4b1a-aa50-89e55755f479-kube-api-access-rd2vd\") pod \"service-ca-operator-777779d784-q8m47\" (UID: \"d14f4a93-e48b-4b1a-aa50-89e55755f479\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.497539 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqfqj\" (UniqueName: \"kubernetes.io/projected/74fce8ed-7252-4b7c-96cb-e1e217504825-kube-api-access-hqfqj\") pod \"service-ca-9c57cc56f-x4zw8\" (UID: \"74fce8ed-7252-4b7c-96cb-e1e217504825\") " pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.515420 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.518348 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-d7npx" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.518658 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.518673 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lc9z\" (UniqueName: \"kubernetes.io/projected/6a4a3180-9cf3-4efd-a188-0b011c91c1c9-kube-api-access-2lc9z\") pod \"machine-config-server-2mxpf\" (UID: \"6a4a3180-9cf3-4efd-a188-0b011c91c1c9\") " pod="openshift-machine-config-operator/machine-config-server-2mxpf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.543763 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ln46\" (UniqueName: \"kubernetes.io/projected/94c58783-dca0-46ea-8aae-09314c370046-kube-api-access-8ln46\") pod \"machine-config-operator-74547568cd-9nrzl\" (UID: \"94c58783-dca0-46ea-8aae-09314c370046\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.545196 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.547149 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.547237 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.047214795 +0000 UTC m=+137.958077428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.547422 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.547880 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.047865848 +0000 UTC m=+137.958728481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.554294 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2mxpf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.557496 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.559195 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lhc9\" (UniqueName: \"kubernetes.io/projected/b3c7db43-a972-4077-94c9-d5d26f9bfabc-kube-api-access-9lhc9\") pod \"dns-default-qrnfg\" (UID: \"b3c7db43-a972-4077-94c9-d5d26f9bfabc\") " pod="openshift-dns/dns-default-qrnfg" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.562588 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qrnfg" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.564393 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-24rzj" event={"ID":"735c2ab3-4d91-4906-81f6-77224425b729","Type":"ContainerStarted","Data":"eb2acb3665a53c94137167b344fb6650743ab33399d700a5f8789b8420d2953d"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.565961 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4jtsc" event={"ID":"f79308a5-8560-4d7d-9180-92f05193a4ce","Type":"ContainerStarted","Data":"e296faaac1d185c76566bdb59329031bee04f9761337ec4ca1dc493e0ee0d410"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.567595 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" event={"ID":"7eb6916d-1721-48bf-b67b-00ccc0144871","Type":"ContainerStarted","Data":"8c8aa9128f6d1fc38b00876a261ebf98be88b6bd08fe7dfec32345e00608e447"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.570080 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" event={"ID":"c5de838f-c611-4eea-8e8c-51016e473942","Type":"ContainerStarted","Data":"f32de7ca2334efc78a41e96f213ebcc36695314d2255d3bc2c28b4a750ee8883"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.570101 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" event={"ID":"c5de838f-c611-4eea-8e8c-51016e473942","Type":"ContainerStarted","Data":"7d0bda5b0fc42e534292ad31f5f90c1bda65474fc5b18833fbe5ed1172ce4aa1"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.575125 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.586968 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdtbx\" (UniqueName: \"kubernetes.io/projected/11fea2b9-4369-4534-8101-5fc365d29723-kube-api-access-pdtbx\") pod \"marketplace-operator-79b997595-j7xwn\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.595909 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shcpn\" (UniqueName: \"kubernetes.io/projected/454c827a-ccb7-4b43-ac8a-bd1b38e77616-kube-api-access-shcpn\") pod \"router-default-5444994796-522ht\" (UID: \"454c827a-ccb7-4b43-ac8a-bd1b38e77616\") " pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.648047 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.648409 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.148394187 +0000 UTC m=+138.059256820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.655972 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5143125e-0b6e-417a-92de-e9b0c641713a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d8nhs\" (UID: \"5143125e-0b6e-417a-92de-e9b0c641713a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.657427 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" event={"ID":"f19c9286-752d-420e-83bb-010eefd59ea1","Type":"ContainerStarted","Data":"a6b61d0dda76bf5a41ed79026064eecc5dfdd720efee4231b1aceefe917f7b5c"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.666967 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5qkbl"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.691732 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-pdn56" event={"ID":"68e02cbc-c03c-45cd-916a-16dd3b0052cd","Type":"ContainerStarted","Data":"a7e922e372e3aa3ac209722cafacddf1998a86c937e1b9c1a766a2ac48852521"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.691786 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-pdn56" 
event={"ID":"68e02cbc-c03c-45cd-916a-16dd3b0052cd","Type":"ContainerStarted","Data":"32d56113c6686933c1913c1dc18fa0eaf88cc4683ef89bfe832b545f1eeabf55"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.716360 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f6x7l"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.724663 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" event={"ID":"b17dc2ee-4e0e-4c28-a410-0ac418884f44","Type":"ContainerStarted","Data":"738f27360c7f81bc8b37873443f33e4067eb0f11f752fcaf647f109a94fa3acd"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.724915 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" event={"ID":"b17dc2ee-4e0e-4c28-a410-0ac418884f44","Type":"ContainerStarted","Data":"4c483b1cc8a963a56170f80dcd55e7e4549c2708ff139ef7271337f04beff534"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.733332 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" event={"ID":"344d0ffc-25ff-4503-a029-129a7e178a11","Type":"ContainerStarted","Data":"1ed00cc7a370b9408d46aac49c835895425575614cf41fcd795bc7bc878c6342"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.733387 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" event={"ID":"344d0ffc-25ff-4503-a029-129a7e178a11","Type":"ContainerStarted","Data":"c46a212f5d2e47084513d248b3c27be94e4b28bbec866fd6b157a0caf402e8ac"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.735142 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:08:00 crc 
kubenswrapper[4756]: I0224 00:08:00.738496 4756 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zldwl container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.738643 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" podUID="344d0ffc-25ff-4503-a029-129a7e178a11" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.739731 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" event={"ID":"3b1b8164-4f03-4067-b349-265636839558","Type":"ContainerStarted","Data":"4929eda07bf6f1f493cd2fbc54759a07b85735b3d10ed7e9ebb8d91de1b3844e"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.739779 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" event={"ID":"3b1b8164-4f03-4067-b349-265636839558","Type":"ContainerStarted","Data":"2534c3412c1d7a8068af5dcd7c7257373eded39354782f4b7f1b495dbffc5bf9"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.749137 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" event={"ID":"22582027-e646-48aa-a5f5-7fb50f199830","Type":"ContainerStarted","Data":"c64c629b96bb0cecbd7286182108b9ff9eb46c583699b351ccb323035c4068d4"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.753356 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.754042 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.254018942 +0000 UTC m=+138.164881575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.754192 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.756653 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.756806 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tznzq" event={"ID":"09d326ee-0878-4f58-b935-9405228e3008","Type":"ContainerStarted","Data":"d0fbbcd0f1e5fec923d91df70f9313b88e8c2d0add6d880e36e4d2f088ce622e"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.759712 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.760554 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" event={"ID":"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2","Type":"ContainerStarted","Data":"2995869d27ca954ee33f0e53d9ea01c570c6845c28de20ee0c4ca8419b6ea85e"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.778456 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.778650 4756 generic.go:334] "Generic (PLEG): container finished" podID="8083f569-75ed-42a3-aee0-3590a86f4329" containerID="03aa89653658ac53b5633071489d6b40863faafd9078e485bf1161c1cc83e6a8" exitCode=0 Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.778740 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" event={"ID":"8083f569-75ed-42a3-aee0-3590a86f4329","Type":"ContainerDied","Data":"03aa89653658ac53b5633071489d6b40863faafd9078e485bf1161c1cc83e6a8"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.778777 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" event={"ID":"8083f569-75ed-42a3-aee0-3590a86f4329","Type":"ContainerStarted","Data":"625342c86e7043afa699203fb90e873fbb80627c85dd96d4ba7c0983a4b4a33b"} Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.781268 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.789590 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.792299 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv"] Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.799760 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:08:00 crc kubenswrapper[4756]: W0224 00:08:00.840929 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2edfb46_22d6_4708_ab98_7e51b058dc0c.slice/crio-a1dbaec17aad9a1f79cb09b087a528ef64fd78b79937654c8d6016b44fe82a07 WatchSource:0}: Error finding container a1dbaec17aad9a1f79cb09b087a528ef64fd78b79937654c8d6016b44fe82a07: Status 404 returned error can't find the container with id a1dbaec17aad9a1f79cb09b087a528ef64fd78b79937654c8d6016b44fe82a07 Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.855809 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.855995 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.3559666 +0000 UTC m=+138.266829233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.856338 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.857422 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.35740434 +0000 UTC m=+138.268266973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: W0224 00:08:00.881890 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02033419_96e3_46ba_a650_d528bf50492e.slice/crio-49869b486181b71dc8fdad0da5a1172ab5c532f940374cd82c861d8dae83817e WatchSource:0}: Error finding container 49869b486181b71dc8fdad0da5a1172ab5c532f940374cd82c861d8dae83817e: Status 404 returned error can't find the container with id 49869b486181b71dc8fdad0da5a1172ab5c532f940374cd82c861d8dae83817e Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.959535 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:00 crc kubenswrapper[4756]: E0224 00:08:00.959956 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.459934268 +0000 UTC m=+138.370796901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:00 crc kubenswrapper[4756]: I0224 00:08:00.985564 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.045921 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.061142 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:01 crc kubenswrapper[4756]: E0224 00:08:01.061502 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.561489033 +0000 UTC m=+138.472351666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.075427 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.130155 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.140773 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.162834 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:01 crc kubenswrapper[4756]: E0224 00:08:01.163252 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.663228954 +0000 UTC m=+138.574091577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.165708 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dm5nw"] Feb 24 00:08:01 crc kubenswrapper[4756]: W0224 00:08:01.219446 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f8b74a7_cfe2_4675_ad1c_3990da3d401c.slice/crio-dc871d1e7a031a04a9f115be40593cb3dd8f4be1312780a88284d0262059bf71 WatchSource:0}: Error finding container dc871d1e7a031a04a9f115be40593cb3dd8f4be1312780a88284d0262059bf71: Status 404 returned error can't find the container with id dc871d1e7a031a04a9f115be40593cb3dd8f4be1312780a88284d0262059bf71 Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.265744 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:01 crc kubenswrapper[4756]: E0224 00:08:01.266246 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.766224948 +0000 UTC m=+138.677087581 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.330770 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.368154 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:01 crc kubenswrapper[4756]: E0224 00:08:01.368549 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.868509118 +0000 UTC m=+138.779371751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.481980 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:01 crc kubenswrapper[4756]: E0224 00:08:01.482422 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:01.982407878 +0000 UTC m=+138.893270501 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.534642 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.587925 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:01 crc kubenswrapper[4756]: E0224 00:08:01.588190 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:02.088169288 +0000 UTC m=+138.999031921 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.588234 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:01 crc kubenswrapper[4756]: E0224 00:08:01.588573 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:02.088567532 +0000 UTC m=+138.999430165 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.593549 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-q8m47"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.626286 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-d7npx"] Feb 24 00:08:01 crc kubenswrapper[4756]: W0224 00:08:01.627596 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod454c827a_ccb7_4b43_ac8a_bd1b38e77616.slice/crio-d317fa86c9094b45cbdeebbb7d37eb855b97ba15e139e3bf78af0cd494ac09e3 WatchSource:0}: Error finding container d317fa86c9094b45cbdeebbb7d37eb855b97ba15e139e3bf78af0cd494ac09e3: Status 404 returned error can't find the container with id d317fa86c9094b45cbdeebbb7d37eb855b97ba15e139e3bf78af0cd494ac09e3 Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.628922 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7qp8v"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.644685 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.689465 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:01 crc kubenswrapper[4756]: E0224 00:08:01.690269 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:02.190246121 +0000 UTC m=+139.101108754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.777777 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.795274 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:01 crc kubenswrapper[4756]: E0224 00:08:01.795730 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-24 00:08:02.29571589 +0000 UTC m=+139.206578523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.811725 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29531520-pdn56" podStartSLOduration=78.811702342 podStartE2EDuration="1m18.811702342s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:01.810549752 +0000 UTC m=+138.721412385" watchObservedRunningTime="2026-02-24 00:08:01.811702342 +0000 UTC m=+138.722564975" Feb 24 00:08:01 crc kubenswrapper[4756]: W0224 00:08:01.825000 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod551919c0_dd7a_4002_97f9_29f554d75179.slice/crio-5e0cb040991d4c9acd9846d76c95009f24d00fe9f6b9b0a168be0a3423a54488 WatchSource:0}: Error finding container 5e0cb040991d4c9acd9846d76c95009f24d00fe9f6b9b0a168be0a3423a54488: Status 404 returned error can't find the container with id 5e0cb040991d4c9acd9846d76c95009f24d00fe9f6b9b0a168be0a3423a54488 Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.825191 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz" 
event={"ID":"549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc","Type":"ContainerStarted","Data":"3f3ac7e075e642056fba3669ce7a6e2a54c271223a0003147590f1751bb84f77"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.833370 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" event={"ID":"89fb1442-f37a-40d6-ab9c-34e8351d2cdb","Type":"ContainerStarted","Data":"00b3ea85fef895af5082a4f2529410627c54a643cc80cd35a4075abd58727235"} Feb 24 00:08:01 crc kubenswrapper[4756]: W0224 00:08:01.890102 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf577e511_2dda_45d6_912e_80c6e01f4ab5.slice/crio-0d1edb06880a51de42ed4797fcb5d921a7a71341da44e2c01f57c7e9995fb53c WatchSource:0}: Error finding container 0d1edb06880a51de42ed4797fcb5d921a7a71341da44e2c01f57c7e9995fb53c: Status 404 returned error can't find the container with id 0d1edb06880a51de42ed4797fcb5d921a7a71341da44e2c01f57c7e9995fb53c Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.900037 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:01 crc kubenswrapper[4756]: E0224 00:08:01.901900 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:02.401863942 +0000 UTC m=+139.312726755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907004 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2mxpf" event={"ID":"6a4a3180-9cf3-4efd-a188-0b011c91c1c9","Type":"ContainerStarted","Data":"cc4e81d798e47af7244bd6f30ef344220d565ace343fbb5f2c2630d07803e85a"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907089 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" event={"ID":"c7d56473-d779-43ac-a1a0-2924bab188f5","Type":"ContainerStarted","Data":"5acba35bb80bc3628fdea9f609e2f2ff0a949d00c13d2eaff038ea1bec094276"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907107 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" event={"ID":"5ecf9e21-1dc7-4523-a278-5edcefaccd3f","Type":"ContainerStarted","Data":"96077b7e7d6421855018f0f9b9b1b4ab608f869bfe48e39ff8b08656b76088af"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907119 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-522ht" event={"ID":"454c827a-ccb7-4b43-ac8a-bd1b38e77616","Type":"ContainerStarted","Data":"d317fa86c9094b45cbdeebbb7d37eb855b97ba15e139e3bf78af0cd494ac09e3"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907132 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x" 
event={"ID":"7561977d-310e-468c-9240-58f94bfb8227","Type":"ContainerStarted","Data":"788920da8db429995c463e7c64f93dda257cb8b92d38dabd4b84b0b987aa136e"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907148 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" event={"ID":"e5cdd0fc-4966-4022-b2c0-3eb556f083b0","Type":"ContainerStarted","Data":"7abd525132751f20dc44d5560bed568ee428df4edeed87133123aba1a4d8a2ec"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907164 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4jtsc" event={"ID":"f79308a5-8560-4d7d-9180-92f05193a4ce","Type":"ContainerStarted","Data":"71346d03f6f962f4dc54a89a36a4b0e524d52805ccbbff863e280363c260bdbd"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907181 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-cvvhf"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907202 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq" event={"ID":"02033419-96e3-46ba-a650-d528bf50492e","Type":"ContainerStarted","Data":"49869b486181b71dc8fdad0da5a1172ab5c532f940374cd82c861d8dae83817e"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907239 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-gdn5q"] Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907255 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" event={"ID":"c5de838f-c611-4eea-8e8c-51016e473942","Type":"ContainerStarted","Data":"3e8b400b16adf8f39ed0e584ecc09e1ef32e573592faf0c43929dcfd96f5012a"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.907279 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" event={"ID":"6f8b74a7-cfe2-4675-ad1c-3990da3d401c","Type":"ContainerStarted","Data":"dc871d1e7a031a04a9f115be40593cb3dd8f4be1312780a88284d0262059bf71"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.908606 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" event={"ID":"d14f4a93-e48b-4b1a-aa50-89e55755f479","Type":"ContainerStarted","Data":"ee2c59a15fb01f03e836d146d3c939a37cdfa80749d7b695684ead3bb00a0d3b"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.937758 4756 generic.go:334] "Generic (PLEG): container finished" podID="e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2" containerID="cf56c6909adffc7d072fa1bdc12b813b5a0895615812ebe868cd1813219ee29f" exitCode=0 Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.937827 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" event={"ID":"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2","Type":"ContainerDied","Data":"cf56c6909adffc7d072fa1bdc12b813b5a0895615812ebe868cd1813219ee29f"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.960920 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66" event={"ID":"04c98116-2117-4e89-ac51-cf5ee7e8df6b","Type":"ContainerStarted","Data":"412463c8d6ab13e49dd74641b0b65d530340ca610cd3743e914c138231d42477"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.964700 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" event={"ID":"044b4398-5b40-4f37-9ced-33b966b391df","Type":"ContainerStarted","Data":"cbd5ce799ebd3598edfa5879388fb99c43181e6d0759320744416a076bad40bf"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.966590 4756 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk" event={"ID":"1f20c38d-f4f0-47ea-a755-54328dc8fa90","Type":"ContainerStarted","Data":"11e5bfffe74e215fdc201b34124a4f9253af63ebe001415f5943fb2fbf2483fb"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.969909 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6" event={"ID":"7f127f66-2445-4615-8949-1a2e70a902c0","Type":"ContainerStarted","Data":"5973937f814a6530f5e923a4841e1517b0b86257895ab76483224f20c549052b"} Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.974437 4756 generic.go:334] "Generic (PLEG): container finished" podID="22582027-e646-48aa-a5f5-7fb50f199830" containerID="28551346d37919e95e9b411a741330390bd892a88cd568911ddaa968965cb578" exitCode=0 Feb 24 00:08:01 crc kubenswrapper[4756]: I0224 00:08:01.974773 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" event={"ID":"22582027-e646-48aa-a5f5-7fb50f199830","Type":"ContainerDied","Data":"28551346d37919e95e9b411a741330390bd892a88cd568911ddaa968965cb578"} Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.002032 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tznzq" event={"ID":"09d326ee-0878-4f58-b935-9405228e3008","Type":"ContainerStarted","Data":"50392d17b1e0e8bbbb056ae771706d03c440b7e509d0962b20acac85bce6813d"} Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.002828 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 
00:08:02.017797 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:02.517777993 +0000 UTC m=+139.428640626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.040544 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vv4vz" podStartSLOduration=79.040511047 podStartE2EDuration="1m19.040511047s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:02.031240767 +0000 UTC m=+138.942103410" watchObservedRunningTime="2026-02-24 00:08:02.040511047 +0000 UTC m=+138.951373690" Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.045231 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f6x7l" event={"ID":"f2edfb46-22d6-4708-ab98-7e51b058dc0c","Type":"ContainerStarted","Data":"63bde2d7d1311e994f602d3987ebfab365d13e4435dd16f3c54e3154fca4ab12"} Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.045311 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f6x7l" 
event={"ID":"f2edfb46-22d6-4708-ab98-7e51b058dc0c","Type":"ContainerStarted","Data":"a1dbaec17aad9a1f79cb09b087a528ef64fd78b79937654c8d6016b44fe82a07"} Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.045881 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.046938 4756 csr.go:261] certificate signing request csr-46lm8 is approved, waiting to be issued Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.047953 4756 patch_prober.go:28] interesting pod/console-operator-58897d9998-f6x7l container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.047992 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-f6x7l" podUID="f2edfb46-22d6-4708-ab98-7e51b058dc0c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.052078 4756 csr.go:257] certificate signing request csr-46lm8 is issued Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.056648 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-24rzj" event={"ID":"735c2ab3-4d91-4906-81f6-77224425b729","Type":"ContainerStarted","Data":"8e01bb6c65d3ab5741ab3ea414fc983657686b6749592599de686190497a80e9"} Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.057297 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-24rzj" Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.060263 4756 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-24rzj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.060307 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-24rzj" podUID="735c2ab3-4d91-4906-81f6-77224425b729" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.061225 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" event={"ID":"7eb6916d-1721-48bf-b67b-00ccc0144871","Type":"ContainerStarted","Data":"a1aa4e61479551f0520f11be5aa37e611f275f08a88dc5cb27c0d9db0d82a50b"} Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.062717 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.063637 4756 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-xrpb6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.063679 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" podUID="7eb6916d-1721-48bf-b67b-00ccc0144871" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.063765 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" event={"ID":"42c3019f-04ad-4a2e-af4d-411b5c0c9392","Type":"ContainerStarted","Data":"94d9ef7f3217fcbfec8e088c6a89c0af4af30bcfaedb6e2d2b4ecae3c15a2cba"} Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.073509 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" podStartSLOduration=78.073487475 podStartE2EDuration="1m18.073487475s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:02.069013941 +0000 UTC m=+138.979876574" watchObservedRunningTime="2026-02-24 00:08:02.073487475 +0000 UTC m=+138.984350108" Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.077941 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.103665 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 00:08:02.104899 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:02.604869758 +0000 UTC m=+139.515732391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.153497 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vllpl" podStartSLOduration=78.153475965 podStartE2EDuration="1m18.153475965s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:02.152552444 +0000 UTC m=+139.063415077" watchObservedRunningTime="2026-02-24 00:08:02.153475965 +0000 UTC m=+139.064338598" Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.229342 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 00:08:02.234088 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:02.734051416 +0000 UTC m=+139.644914039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.254756 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-dhtw6"]
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.270537 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" podStartSLOduration=79.270505074 podStartE2EDuration="1m19.270505074s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:02.270435272 +0000 UTC m=+139.181297905" watchObservedRunningTime="2026-02-24 00:08:02.270505074 +0000 UTC m=+139.181367707"
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.309385 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6tzzt" podStartSLOduration=79.309361955 podStartE2EDuration="1m19.309361955s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:02.308015738 +0000 UTC m=+139.218878371" watchObservedRunningTime="2026-02-24 00:08:02.309361955 +0000 UTC m=+139.220224578"
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.331243 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 00:08:02.331599 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:02.831576752 +0000 UTC m=+139.742439385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.427830 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl"]
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.432497 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 00:08:02.432957 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:02.93294165 +0000 UTC m=+139.843804283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.435785 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4jtsc" podStartSLOduration=79.435766897 podStartE2EDuration="1m19.435766897s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:02.433657914 +0000 UTC m=+139.344520557" watchObservedRunningTime="2026-02-24 00:08:02.435766897 +0000 UTC m=+139.346629530"
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.487162 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" podStartSLOduration=78.48714104 podStartE2EDuration="1m18.48714104s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:02.483866527 +0000 UTC m=+139.394729160" watchObservedRunningTime="2026-02-24 00:08:02.48714104 +0000 UTC m=+139.398003673"
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.487207 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-x4zw8"]
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.500664 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qrnfg"]
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.514530 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j7xwn"]
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.549496 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 00:08:02.551188 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.051159549 +0000 UTC m=+139.962022182 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.566715 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 00:08:02.568218 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.068189537 +0000 UTC m=+139.979052170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.577777 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-f6x7l" podStartSLOduration=79.577743767 podStartE2EDuration="1m19.577743767s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:02.56914436 +0000 UTC m=+139.480007003" watchObservedRunningTime="2026-02-24 00:08:02.577743767 +0000 UTC m=+139.488606420"
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.667806 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 00:08:02.668336 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.168301922 +0000 UTC m=+140.079164545 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.769388 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 00:08:02.769875 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.269853536 +0000 UTC m=+140.180716169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.870438 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 00:08:02.870658 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.370599463 +0000 UTC m=+140.281462096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.879468 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 00:08:02.880160 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.380143422 +0000 UTC m=+140.291006055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:02 crc kubenswrapper[4756]: I0224 00:08:02.980458 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:02 crc kubenswrapper[4756]: E0224 00:08:02.980994 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.480957641 +0000 UTC m=+140.391820274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.054535 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 00:03:02 +0000 UTC, rotation deadline is 2026-12-19 06:00:54.74248004 +0000 UTC
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.054581 4756 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7157h52m51.687902408s for next certificate rotation
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.082839 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.083245 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.583233121 +0000 UTC m=+140.494095754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.086443 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qrnfg" event={"ID":"b3c7db43-a972-4077-94c9-d5d26f9bfabc","Type":"ContainerStarted","Data":"c93ed0acae0f289c97b86ca75f0b353b5a21c1a4165607f83a0da1de5cff60e1"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.089178 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66" event={"ID":"04c98116-2117-4e89-ac51-cf5ee7e8df6b","Type":"ContainerStarted","Data":"a519453b85bc6267b022f5c7201a473665e1dfef096c4bdb231ad1a49290f2f6"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.090454 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" event={"ID":"f577e511-2dda-45d6-912e-80c6e01f4ab5","Type":"ContainerStarted","Data":"0d1edb06880a51de42ed4797fcb5d921a7a71341da44e2c01f57c7e9995fb53c"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.091233 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" event={"ID":"11fea2b9-4369-4534-8101-5fc365d29723","Type":"ContainerStarted","Data":"4e30000b918fd7552256f6e7977960c65680d28d48504888bd6874190080438d"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.092223 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" event={"ID":"ba791051-d526-439d-839b-7b84623e52f1","Type":"ContainerStarted","Data":"0f99969ebd8743fa1a617aa0482aee498c423acc7dff56a8ab1ea142f8dfa66c"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.094263 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" event={"ID":"e5cdd0fc-4966-4022-b2c0-3eb556f083b0","Type":"ContainerStarted","Data":"84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.094614 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.096120 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" event={"ID":"482daac8-f9ac-43d2-a101-cf64acefa9d3","Type":"ContainerStarted","Data":"a85927e161510d14c992cf172b91953dfef4b093b538663cd8035d578c71cc7e"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.096575 4756 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5qkbl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body=
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.096619 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" podUID="e5cdd0fc-4966-4022-b2c0-3eb556f083b0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.097945 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-522ht" event={"ID":"454c827a-ccb7-4b43-ac8a-bd1b38e77616","Type":"ContainerStarted","Data":"22fe80d1c7ed21675913492285163d76e86776607280930f1c88cc3e34d537ad"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.104490 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x" event={"ID":"7561977d-310e-468c-9240-58f94bfb8227","Type":"ContainerStarted","Data":"15a24f46b4a5dcc0c3c6f9426e4093ce5d9e28fd3268d99b81101b71c197a21d"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.105478 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" event={"ID":"0707cda5-590e-4bc2-a966-970f98336869","Type":"ContainerStarted","Data":"5b9173bc75e3bcef4f903bd35d03234c0c390d5b560cc1182690e23347309cb1"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.107383 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" event={"ID":"94c58783-dca0-46ea-8aae-09314c370046","Type":"ContainerStarted","Data":"1b2b4c25152c6917271faed75a883c8c432200ea349331b6398a7b2097664eb8"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.112279 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-24rzj" podStartSLOduration=80.112250392 podStartE2EDuration="1m20.112250392s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:02.608440016 +0000 UTC m=+139.519302659" watchObservedRunningTime="2026-02-24 00:08:03.112250392 +0000 UTC m=+140.023113025"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.112420 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zkb66" podStartSLOduration=79.112415248 podStartE2EDuration="1m19.112415248s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.109083893 +0000 UTC m=+140.019946526" watchObservedRunningTime="2026-02-24 00:08:03.112415248 +0000 UTC m=+140.023277881"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.143429 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-522ht" podStartSLOduration=79.143409377 podStartE2EDuration="1m19.143409377s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.141223012 +0000 UTC m=+140.052085665" watchObservedRunningTime="2026-02-24 00:08:03.143409377 +0000 UTC m=+140.054272010"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.162143 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d7npx" event={"ID":"db8ac651-47dc-4e2c-97e0-35c3d5fa3b71","Type":"ContainerStarted","Data":"af66526c082b5c4b663e6dff114efcf5117c40aa4b5bb2bfcb35f0b50d5e255e"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.162194 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-d7npx" event={"ID":"db8ac651-47dc-4e2c-97e0-35c3d5fa3b71","Type":"ContainerStarted","Data":"950e750ab33e83566e910d6bfa64f279a42fe0305fc3cd1e4f954ef5a2090175"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.171668 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" event={"ID":"42c3019f-04ad-4a2e-af4d-411b5c0c9392","Type":"ContainerStarted","Data":"ccf93a2e1644a5e3a0f584489d3176c81b3b9a65d3e842151154ba7206b52f64"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.187093 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.187448 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.687403246 +0000 UTC m=+140.598265879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.187774 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.190019 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.689999865 +0000 UTC m=+140.600862498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.196566 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" event={"ID":"74fce8ed-7252-4b7c-96cb-e1e217504825","Type":"ContainerStarted","Data":"3440fc57aab0849272d912649fa0148d8819a43e056395629707626bfa75832c"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.215799 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" event={"ID":"89fb1442-f37a-40d6-ab9c-34e8351d2cdb","Type":"ContainerStarted","Data":"3f364390206793befba18a79d093cbb1cfc59873f5a2f95ba436580f0574f4a7"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.217026 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.222448 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" event={"ID":"c7d56473-d779-43ac-a1a0-2924bab188f5","Type":"ContainerStarted","Data":"c2fca5e7488a0c8f1726f0eaa4960bf18719f3e119eb6f93258171163fcb0df8"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.222954 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" podStartSLOduration=80.222930972 podStartE2EDuration="1m20.222930972s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.219626168 +0000 UTC m=+140.130488811" watchObservedRunningTime="2026-02-24 00:08:03.222930972 +0000 UTC m=+140.133793605"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.224448 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" event={"ID":"551919c0-dd7a-4002-97f9-29f554d75179","Type":"ContainerStarted","Data":"6ddbdeca0d535253e0dba9f91555873ea881cca2b4dfe303c74b19115a0dd053"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.224487 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" event={"ID":"551919c0-dd7a-4002-97f9-29f554d75179","Type":"ContainerStarted","Data":"5e0cb040991d4c9acd9846d76c95009f24d00fe9f6b9b0a168be0a3423a54488"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.231762 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz" event={"ID":"549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc","Type":"ContainerStarted","Data":"6360df40beeb2cdf1ba9ed4a83fe764a3fc48ca53e1263c0f4852804370d50dc"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.231797 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz" event={"ID":"549cfa5e-16b2-4bd2-8b1a-c4d9e94f0dcc","Type":"ContainerStarted","Data":"f17939a8bc5832f64ccaba750a73a33ffe8c68467865d9cf5f437c86e6cc8ada"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.240613 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.245556 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r9bdf" podStartSLOduration=79.245532732 podStartE2EDuration="1m19.245532732s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.244813157 +0000 UTC m=+140.155675780" watchObservedRunningTime="2026-02-24 00:08:03.245532732 +0000 UTC m=+140.156395365"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.276691 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-2n9c6" podStartSLOduration=79.276666116 podStartE2EDuration="1m19.276666116s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.274045946 +0000 UTC m=+140.184908579" watchObservedRunningTime="2026-02-24 00:08:03.276666116 +0000 UTC m=+140.187528749"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.288531 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" event={"ID":"d14f4a93-e48b-4b1a-aa50-89e55755f479","Type":"ContainerStarted","Data":"de0bdbf038ca47ad3dce78fec64279c6dfab50bb8e732e3bcefeedf52bcb9edc"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.289536 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.291039 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.791011931 +0000 UTC m=+140.701874564 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.303946 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" event={"ID":"5143125e-0b6e-417a-92de-e9b0c641713a","Type":"ContainerStarted","Data":"2484a9b005426a0a1c5abe49bac137a89a3b2b85c48e3abebe1c512941b38fc1"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.317618 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-d7npx" podStartSLOduration=6.317585038 podStartE2EDuration="6.317585038s" podCreationTimestamp="2026-02-24 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.316725348 +0000 UTC m=+140.227587981" watchObservedRunningTime="2026-02-24 00:08:03.317585038 +0000 UTC m=+140.228447671"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.350929 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" event={"ID":"8ec53f8e-ccec-4f40-ba05-50816a70be2e","Type":"ContainerStarted","Data":"b9faba147a1feacf15f199acd34adf9af30a4f72379ef2c4f7e5eb336efadd16"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.352152 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.356042 4756 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bsdsk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body=
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.356121 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" podUID="8ec53f8e-ccec-4f40-ba05-50816a70be2e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.359799 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq" event={"ID":"02033419-96e3-46ba-a650-d528bf50492e","Type":"ContainerStarted","Data":"7a20ccc22005f8c820b886532eac997b3ae62028340e8be3d8b2e561f8e5b3da"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.391985 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.396414 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.896387587 +0000 UTC m=+140.807250220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.455565 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" event={"ID":"6f8b74a7-cfe2-4675-ad1c-3990da3d401c","Type":"ContainerStarted","Data":"fe2c4f9f3509bc5e44805916debf343a0dcec5824c975d1fbfa8f583463ec31a"}
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.456993 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff"
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.461743 4756 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5lvff container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.461809 4756 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" podUID="6f8b74a7-cfe2-4675-ad1c-3990da3d401c" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.463713 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" event={"ID":"5ecf9e21-1dc7-4523-a278-5edcefaccd3f","Type":"ContainerStarted","Data":"b8a5ae8b7714cd9c28376cd9d261424929f3c935ace121164d869cc84bd07fa8"} Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.497146 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.498096 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:03.998079007 +0000 UTC m=+140.908941640 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.498456 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-q8m47" podStartSLOduration=79.498428699 podStartE2EDuration="1m19.498428699s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.398928085 +0000 UTC m=+140.309790728" watchObservedRunningTime="2026-02-24 00:08:03.498428699 +0000 UTC m=+140.409291332" Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.519629 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6" event={"ID":"7f127f66-2445-4615-8949-1a2e70a902c0","Type":"ContainerStarted","Data":"7780e07a131baf30dd86e7bdffd9693ab2de41060e6eaecb8eb33a02d9c80775"} Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.567158 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" event={"ID":"f19c9286-752d-420e-83bb-010eefd59ea1","Type":"ContainerStarted","Data":"8b8cad27ed949c46ffffc282530add14aa362ad3037ffc819e1a7a6cb71fa87f"} Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.607967 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk" 
event={"ID":"1f20c38d-f4f0-47ea-a755-54328dc8fa90","Type":"ContainerStarted","Data":"7a75d9c352d5195ed54a236f7b6546852b999b288757800df4f610c040c76c01"} Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.608706 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwmkd" podStartSLOduration=79.608669923 podStartE2EDuration="1m19.608669923s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.533025343 +0000 UTC m=+140.443887986" watchObservedRunningTime="2026-02-24 00:08:03.608669923 +0000 UTC m=+140.519532576" Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.611366 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jskfz" podStartSLOduration=80.611347986 podStartE2EDuration="1m20.611347986s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.499887399 +0000 UTC m=+140.410750032" watchObservedRunningTime="2026-02-24 00:08:03.611347986 +0000 UTC m=+140.522210629" Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.629475 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.631851 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.131828272 +0000 UTC m=+141.042690905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.648244 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" podStartSLOduration=79.648210948 podStartE2EDuration="1m19.648210948s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.603107191 +0000 UTC m=+140.513969824" watchObservedRunningTime="2026-02-24 00:08:03.648210948 +0000 UTC m=+140.559073581" Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.699802 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-4k4gq" podStartSLOduration=79.699784167 podStartE2EDuration="1m19.699784167s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.697140596 +0000 UTC m=+140.608003239" watchObservedRunningTime="2026-02-24 00:08:03.699784167 +0000 UTC m=+140.610646810" Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.703093 
4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" event={"ID":"8083f569-75ed-42a3-aee0-3590a86f4329","Type":"ContainerStarted","Data":"decdd5af0df19358a077698e84f7cb3ce3b7773e36143e49033a3413a243844e"} Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.732904 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" event={"ID":"044b4398-5b40-4f37-9ced-33b966b391df","Type":"ContainerStarted","Data":"afcac3f4c9c7663ccbbc55ee50871338f6b17d2ad590aa77c6f903140279191b"} Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.735364 4756 patch_prober.go:28] interesting pod/downloads-7954f5f757-24rzj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.735440 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-24rzj" podUID="735c2ab3-4d91-4906-81f6-77224425b729" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.735965 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.736362 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-24 00:08:04.236342909 +0000 UTC m=+141.147205542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.736549 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.736951 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.236935859 +0000 UTC m=+141.147798492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.752526 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.780166 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" podStartSLOduration=79.78014466 podStartE2EDuration="1m19.78014466s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.778548255 +0000 UTC m=+140.689410888" watchObservedRunningTime="2026-02-24 00:08:03.78014466 +0000 UTC m=+140.691007293" Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.805023 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.816478 4756 patch_prober.go:28] interesting pod/router-default-5444994796-522ht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:08:03 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Feb 24 00:08:03 crc kubenswrapper[4756]: [+]process-running ok Feb 24 00:08:03 crc kubenswrapper[4756]: healthz check failed Feb 24 00:08:03 crc 
kubenswrapper[4756]: I0224 00:08:03.816530 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-522ht" podUID="454c827a-ccb7-4b43-ac8a-bd1b38e77616" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.841970 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.341934793 +0000 UTC m=+141.252797426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.841150 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.844325 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 
00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.845929 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-ppwx6" podStartSLOduration=79.845899679 podStartE2EDuration="1m19.845899679s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.840630027 +0000 UTC m=+140.751492660" watchObservedRunningTime="2026-02-24 00:08:03.845899679 +0000 UTC m=+140.756762322" Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.853429 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.353412099 +0000 UTC m=+141.264274732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.949355 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:03 crc kubenswrapper[4756]: E0224 00:08:03.949820 4756 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.449799735 +0000 UTC m=+141.360662368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:03 crc kubenswrapper[4756]: I0224 00:08:03.951112 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff" podStartSLOduration=79.951082859 podStartE2EDuration="1m19.951082859s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:03.911962339 +0000 UTC m=+140.822824982" watchObservedRunningTime="2026-02-24 00:08:03.951082859 +0000 UTC m=+140.861945492" Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.047731 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-f6x7l" Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.052656 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 
00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.053114 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.55310236 +0000 UTC m=+141.463964993 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.154419 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.154982 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.654956745 +0000 UTC m=+141.565819508 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.256632 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.257107 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.757093519 +0000 UTC m=+141.667956152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.359785 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.360240 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.860204538 +0000 UTC m=+141.771067171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.360330 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.360958 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.860935903 +0000 UTC m=+141.771798536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.462031 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.463029 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.962978944 +0000 UTC m=+141.873841577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.470107 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.470813 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:04.970794324 +0000 UTC m=+141.881656957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.574121 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.574783 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:05.074749491 +0000 UTC m=+141.985612124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.676254 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.676731 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:05.17670805 +0000 UTC m=+142.087570683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.747019 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" event={"ID":"482daac8-f9ac-43d2-a101-cf64acefa9d3","Type":"ContainerStarted","Data":"b244c5e6863356f0e785c5a8b670a148fb5b39676b5fcd471730152a3a024b47"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.756886 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" event={"ID":"f577e511-2dda-45d6-912e-80c6e01f4ab5","Type":"ContainerStarted","Data":"4f40a46c5402c38dce9aa7ee7b3a7291e7345ca9b2b9c8d055dee64d0a6ef660"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.775709 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-tznzq" event={"ID":"09d326ee-0878-4f58-b935-9405228e3008","Type":"ContainerStarted","Data":"2aca31fc4e9aab360942459f8f6d2c8ccf320add9c38c974a83c31bd7a068263"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.777015 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.777489 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:05.277472307 +0000 UTC m=+142.188334940 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.797253 4756 patch_prober.go:28] interesting pod/router-default-5444994796-522ht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 00:08:04 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld
Feb 24 00:08:04 crc kubenswrapper[4756]: [+]process-running ok
Feb 24 00:08:04 crc kubenswrapper[4756]: healthz check failed
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.797615 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-522ht" podUID="454c827a-ccb7-4b43-ac8a-bd1b38e77616" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.798671 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" event={"ID":"e3b2bf7e-5f45-44a8-b3aa-7b905fb279b2","Type":"ContainerStarted","Data":"0c24d244c90d364ee0fbbaccb16c3e38719e5bc07d279f9ee5cd9612475a4d16"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.799494 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm"
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.815872 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" event={"ID":"5ecf9e21-1dc7-4523-a278-5edcefaccd3f","Type":"ContainerStarted","Data":"e522ef4624eaf26cacaf7ea848a347adca82aa0a6381bfdf04e676bd5727920d"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.830349 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" event={"ID":"74fce8ed-7252-4b7c-96cb-e1e217504825","Type":"ContainerStarted","Data":"c06ef1aa6e6e2f38e4cfc526102695af2e04d3c0e67a72548cbbf4a1788e732b"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.858379 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" event={"ID":"ba791051-d526-439d-839b-7b84623e52f1","Type":"ContainerStarted","Data":"f1b724dd7c0ae6e6c8d892cab8416f775b6741645d1b57c2fa5aeb3e84a9c2b5"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.881470 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7qp8v" podStartSLOduration=80.881447186 podStartE2EDuration="1m20.881447186s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:04.828322402 +0000 UTC m=+141.739185045" watchObservedRunningTime="2026-02-24 00:08:04.881447186 +0000 UTC m=+141.792309819"
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.882226 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" podStartSLOduration=81.882220592 podStartE2EDuration="1m21.882220592s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:04.881043172 +0000 UTC m=+141.791905815" watchObservedRunningTime="2026-02-24 00:08:04.882220592 +0000 UTC m=+141.793083215"
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.882472 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.884640 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:05.384624465 +0000 UTC m=+142.295487098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.891653 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" event={"ID":"044b4398-5b40-4f37-9ced-33b966b391df","Type":"ContainerStarted","Data":"49efd0170b6f622093bb340740219b6b7762bdd71575d6fdd2b28f8b4bdab345"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.925912 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zvbpv" podStartSLOduration=80.925888639 podStartE2EDuration="1m20.925888639s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:04.917155978 +0000 UTC m=+141.828018631" watchObservedRunningTime="2026-02-24 00:08:04.925888639 +0000 UTC m=+141.836751272"
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.931320 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" event={"ID":"94c58783-dca0-46ea-8aae-09314c370046","Type":"ContainerStarted","Data":"308fb22652f59499f629f6974498bb68c04793009129626dd34542ba13d8256c"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.931399 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" event={"ID":"94c58783-dca0-46ea-8aae-09314c370046","Type":"ContainerStarted","Data":"eb2f201bc251468adc152bd7770f75453f47aa54a34b5f6fe073299d6dd5bd2c"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.961101 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" event={"ID":"22582027-e646-48aa-a5f5-7fb50f199830","Type":"ContainerStarted","Data":"a82f95b4ed25b067f7d408f8ed56f0d830289b57e73233b91bc0a45dc22ea094"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.974475 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d8nhs" event={"ID":"5143125e-0b6e-417a-92de-e9b0c641713a","Type":"ContainerStarted","Data":"291414b972538ae859537f96a18cd4a970ccfeff68fda5850222d6d067898a9f"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.984013 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" event={"ID":"c7d56473-d779-43ac-a1a0-2924bab188f5","Type":"ContainerStarted","Data":"383088239f6293e5d80530dd40655076fb846ac361186d34af89d18e46e326ca"}
Feb 24 00:08:04 crc kubenswrapper[4756]: I0224 00:08:04.984874 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:04 crc kubenswrapper[4756]: E0224 00:08:04.986904 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:05.486884064 +0000 UTC m=+142.397746697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.010032 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" event={"ID":"11fea2b9-4369-4534-8101-5fc365d29723","Type":"ContainerStarted","Data":"bf4d91725d152a98dab32da45384c9ff6fafeb9ae1ccbef27c9a3e7680af73e8"}
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.011398 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.012419 4756 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-j7xwn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.012481 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" podUID="11fea2b9-4369-4534-8101-5fc365d29723" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.020866 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" event={"ID":"8ec53f8e-ccec-4f40-ba05-50816a70be2e","Type":"ContainerStarted","Data":"5a1b452f175696e946db9aae45f1e5d34e725d2b99c84197fbac322fba04dac0"}
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.036897 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x" event={"ID":"7561977d-310e-468c-9240-58f94bfb8227","Type":"ContainerStarted","Data":"b7c1c72db376efc2e0d9ccf99d5295a260e6384197b5983b640b58ad676c3aab"}
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.063303 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2mxpf" event={"ID":"6a4a3180-9cf3-4efd-a188-0b011c91c1c9","Type":"ContainerStarted","Data":"dc66c98664712e3422e7247641918df77c2f94e03306cb9b503699c0f220ac0f"}
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.086867 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:05 crc kubenswrapper[4756]: E0224 00:08:05.092572 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:05.592548811 +0000 UTC m=+142.503411634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.103590 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" event={"ID":"8083f569-75ed-42a3-aee0-3590a86f4329","Type":"ContainerStarted","Data":"e677e0f308f3b1031f6487ab533c73ff2267b8b98db1ef7c7a60059429a38ae8"}
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.146401 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk" event={"ID":"1f20c38d-f4f0-47ea-a755-54328dc8fa90","Type":"ContainerStarted","Data":"50b1254e982b6ee25a8ac99907dee1cf85114a6eccec8c56d2b07ba70066a41c"}
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.147242 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.165595 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" event={"ID":"0707cda5-590e-4bc2-a966-970f98336869","Type":"ContainerStarted","Data":"b009fe768a6cbfeaeba4c1286d14b90a6b190ca54f670a68c04ac8e67aeacc33"}
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.194905 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-tznzq" podStartSLOduration=81.194879102 podStartE2EDuration="1m21.194879102s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.100445103 +0000 UTC m=+142.011307736" watchObservedRunningTime="2026-02-24 00:08:05.194879102 +0000 UTC m=+142.105741735"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.210170 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:05 crc kubenswrapper[4756]: E0224 00:08:05.211918 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:05.711899829 +0000 UTC m=+142.622762462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.226758 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qrnfg" event={"ID":"b3c7db43-a972-4077-94c9-d5d26f9bfabc","Type":"ContainerStarted","Data":"11761a58326fecfb661e0f335722977ee0cd10ec66c0716f342b4df9058c3b87"}
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.294597 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5lvff"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.296045 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-x4zw8" podStartSLOduration=81.296033173 podStartE2EDuration="1m21.296033173s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.204769683 +0000 UTC m=+142.115632316" watchObservedRunningTime="2026-02-24 00:08:05.296033173 +0000 UTC m=+142.206895816"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.297503 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-z8lrn" podStartSLOduration=81.297498803 podStartE2EDuration="1m21.297498803s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.294772539 +0000 UTC m=+142.205635182" watchObservedRunningTime="2026-02-24 00:08:05.297498803 +0000 UTC m=+142.208361426"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.313408 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:05 crc kubenswrapper[4756]: E0224 00:08:05.314853 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:05.814832362 +0000 UTC m=+142.725695185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.426882 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:05 crc kubenswrapper[4756]: E0224 00:08:05.427315 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:05.927287432 +0000 UTC m=+142.838150075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.442423 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" podStartSLOduration=81.442401684 podStartE2EDuration="1m21.442401684s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.44084083 +0000 UTC m=+142.351703463" watchObservedRunningTime="2026-02-24 00:08:05.442401684 +0000 UTC m=+142.353264317"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.443672 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9nrzl" podStartSLOduration=81.443667158 podStartE2EDuration="1m21.443667158s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.379519324 +0000 UTC m=+142.290381957" watchObservedRunningTime="2026-02-24 00:08:05.443667158 +0000 UTC m=+142.354529781"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.530214 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:05 crc kubenswrapper[4756]: E0224 00:08:05.530595 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:06.030578356 +0000 UTC m=+142.941440989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.580696 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" podStartSLOduration=81.580672674 podStartE2EDuration="1m21.580672674s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.515451424 +0000 UTC m=+142.426314057" watchObservedRunningTime="2026-02-24 00:08:05.580672674 +0000 UTC m=+142.491535307"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.629532 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b246x" podStartSLOduration=81.62950956 podStartE2EDuration="1m21.62950956s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.583861544 +0000 UTC m=+142.494724187" watchObservedRunningTime="2026-02-24 00:08:05.62950956 +0000 UTC m=+142.540372193"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.631953 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:05 crc kubenswrapper[4756]: E0224 00:08:05.632395 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:06.132381949 +0000 UTC m=+143.043244582 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.674318 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2mxpf" podStartSLOduration=8.674280205 podStartE2EDuration="8.674280205s" podCreationTimestamp="2026-02-24 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.631319482 +0000 UTC m=+142.542182115" watchObservedRunningTime="2026-02-24 00:08:05.674280205 +0000 UTC m=+142.585142838"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.678728 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-dm5nw" podStartSLOduration=81.678699968 podStartE2EDuration="1m21.678699968s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.661870907 +0000 UTC m=+142.572733540" watchObservedRunningTime="2026-02-24 00:08:05.678699968 +0000 UTC m=+142.589562601"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.683423 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.713870 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" podStartSLOduration=81.713845671 podStartE2EDuration="1m21.713845671s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.710705142 +0000 UTC m=+142.621567775" watchObservedRunningTime="2026-02-24 00:08:05.713845671 +0000 UTC m=+142.624708304"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.733270 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:05 crc kubenswrapper[4756]: E0224 00:08:05.733688 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:06.233673835 +0000 UTC m=+143.144536458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.772681 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" podStartSLOduration=82.7726578 podStartE2EDuration="1m22.7726578s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.769550853 +0000 UTC m=+142.680413486" watchObservedRunningTime="2026-02-24 00:08:05.7726578 +0000 UTC m=+142.683520433"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.796831 4756 patch_prober.go:28] interesting pod/router-default-5444994796-522ht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 00:08:05 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld
Feb 24 00:08:05 crc kubenswrapper[4756]: [+]process-running ok
Feb 24 00:08:05 crc kubenswrapper[4756]: healthz check failed
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.796905 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-522ht" podUID="454c827a-ccb7-4b43-ac8a-bd1b38e77616" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.834806 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:05 crc kubenswrapper[4756]: E0224 00:08:05.835292 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:06.33524135 +0000 UTC m=+143.246103983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.835506 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:05 crc kubenswrapper[4756]: E0224 00:08:05.836039 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:06.336023247 +0000 UTC m=+143.246885880 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.931619 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qrnfg" podStartSLOduration=8.931596075 podStartE2EDuration="8.931596075s" podCreationTimestamp="2026-02-24 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.870930361 +0000 UTC m=+142.781792984" watchObservedRunningTime="2026-02-24 00:08:05.931596075 +0000 UTC m=+142.842458698"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.931883 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk" podStartSLOduration=81.931879014 podStartE2EDuration="1m21.931879014s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:05.930112104 +0000 UTC m=+142.840974737" watchObservedRunningTime="2026-02-24 00:08:05.931879014 +0000 UTC m=+142.842741637"
Feb 24 00:08:05 crc kubenswrapper[4756]: I0224 00:08:05.936713 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID:
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:05 crc kubenswrapper[4756]: E0224 00:08:05.937103 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:06.437058993 +0000 UTC m=+143.347921626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.024189 4756 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bsdsk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.024305 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" podUID="8ec53f8e-ccec-4f40-ba05-50816a70be2e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.038088 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:06 crc kubenswrapper[4756]: E0224 00:08:06.038502 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:06.538489303 +0000 UTC m=+143.449351936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.139759 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:06 crc kubenswrapper[4756]: E0224 00:08:06.140282 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:06.640230655 +0000 UTC m=+143.551093288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.241077 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:06 crc kubenswrapper[4756]: E0224 00:08:06.241512 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:06.741492099 +0000 UTC m=+143.652354732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.242214 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-dhtw6" podStartSLOduration=83.242201574 podStartE2EDuration="1m23.242201574s" podCreationTimestamp="2026-02-24 00:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:06.148406837 +0000 UTC m=+143.059269480" watchObservedRunningTime="2026-02-24 00:08:06.242201574 +0000 UTC m=+143.153064207" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.255512 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z94cj"] Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.258651 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.269707 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.274273 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" event={"ID":"482daac8-f9ac-43d2-a101-cf64acefa9d3","Type":"ContainerStarted","Data":"e37d650b3518451954e7c09d5b86caaec3dd1935410e8047eca0bc3af9772677"} Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.298597 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-cvvhf" event={"ID":"ba791051-d526-439d-839b-7b84623e52f1","Type":"ContainerStarted","Data":"531b07e13bf8e4883fa9195dfaa7e59bfce12719cedc5d5dd3220ee98828681d"} Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.326057 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z94cj"] Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.331839 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qrnfg" event={"ID":"b3c7db43-a972-4077-94c9-d5d26f9bfabc","Type":"ContainerStarted","Data":"72980ea1e3d6663cf56fc1e26d6fb9f94d00162f05baa8e2d6f0c74f3fccb37d"} Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.331889 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qrnfg" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.336307 4756 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-j7xwn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 
00:08:06.336385 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" podUID="11fea2b9-4369-4534-8101-5fc365d29723" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.347830 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.348338 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-catalog-content\") pod \"certified-operators-z94cj\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.348424 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5xrv\" (UniqueName: \"kubernetes.io/projected/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-kube-api-access-p5xrv\") pod \"certified-operators-z94cj\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.348470 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-utilities\") pod \"certified-operators-z94cj\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " pod="openshift-marketplace/certified-operators-z94cj" Feb 24 
00:08:06 crc kubenswrapper[4756]: E0224 00:08:06.348879 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:06.848852424 +0000 UTC m=+143.759715057 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.406677 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-988lm" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.450256 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.450396 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-catalog-content\") pod \"certified-operators-z94cj\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.450654 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p5xrv\" (UniqueName: \"kubernetes.io/projected/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-kube-api-access-p5xrv\") pod \"certified-operators-z94cj\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.450762 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-utilities\") pod \"certified-operators-z94cj\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.453603 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-catalog-content\") pod \"certified-operators-z94cj\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:08:06 crc kubenswrapper[4756]: E0224 00:08:06.456315 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:06.956300672 +0000 UTC m=+143.867163305 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.456666 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-utilities\") pod \"certified-operators-z94cj\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.471191 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t9hch"] Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.472458 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.478169 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.503697 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t9hch"] Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.530415 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5xrv\" (UniqueName: \"kubernetes.io/projected/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-kube-api-access-p5xrv\") pod \"certified-operators-z94cj\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.555930 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.556706 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-catalog-content\") pod \"community-operators-t9hch\" (UID: \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.556758 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zrr6\" (UniqueName: \"kubernetes.io/projected/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-kube-api-access-6zrr6\") pod \"community-operators-t9hch\" (UID: 
\"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.556790 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-utilities\") pod \"community-operators-t9hch\" (UID: \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:08:06 crc kubenswrapper[4756]: E0224 00:08:06.556916 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:07.056895704 +0000 UTC m=+143.967758337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.561365 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bsdsk" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.604472 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.656790 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t7jz2"] Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.657885 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7jz2" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.659549 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-catalog-content\") pod \"community-operators-t9hch\" (UID: \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.659620 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zrr6\" (UniqueName: \"kubernetes.io/projected/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-kube-api-access-6zrr6\") pod \"community-operators-t9hch\" (UID: \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.659654 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-utilities\") pod \"community-operators-t9hch\" (UID: \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.659685 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: 
\"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:06 crc kubenswrapper[4756]: E0224 00:08:06.660016 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:07.160002752 +0000 UTC m=+144.070865385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.660575 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-catalog-content\") pod \"community-operators-t9hch\" (UID: \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.661246 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-utilities\") pod \"community-operators-t9hch\" (UID: \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.681270 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7jz2"] Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.741989 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6zrr6\" (UniqueName: \"kubernetes.io/projected/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-kube-api-access-6zrr6\") pod \"community-operators-t9hch\" (UID: \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.760657 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.761042 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-catalog-content\") pod \"certified-operators-t7jz2\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") " pod="openshift-marketplace/certified-operators-t7jz2" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.761092 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2cvj\" (UniqueName: \"kubernetes.io/projected/789b9f46-f242-43cc-b598-a8571134461d-kube-api-access-t2cvj\") pod \"certified-operators-t7jz2\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") " pod="openshift-marketplace/certified-operators-t7jz2" Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.761138 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-utilities\") pod \"certified-operators-t7jz2\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") " pod="openshift-marketplace/certified-operators-t7jz2" Feb 24 00:08:06 crc kubenswrapper[4756]: E0224 00:08:06.761335 4756 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:07.261286017 +0000 UTC m=+144.172148660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.805533 4756 patch_prober.go:28] interesting pod/router-default-5444994796-522ht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 00:08:06 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld
Feb 24 00:08:06 crc kubenswrapper[4756]: [+]process-running ok
Feb 24 00:08:06 crc kubenswrapper[4756]: healthz check failed
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.805834 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-522ht" podUID="454c827a-ccb7-4b43-ac8a-bd1b38e77616" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.834472 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t9hch"
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.843946 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gl4z4"]
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.858402 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.888698 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-catalog-content\") pod \"certified-operators-t7jz2\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") " pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.888731 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2cvj\" (UniqueName: \"kubernetes.io/projected/789b9f46-f242-43cc-b598-a8571134461d-kube-api-access-t2cvj\") pod \"certified-operators-t7jz2\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") " pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.888769 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.888790 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-utilities\") pod \"certified-operators-t7jz2\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") " pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.896707 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-catalog-content\") pod \"certified-operators-t7jz2\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") " pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:08:06 crc kubenswrapper[4756]: E0224 00:08:06.897040 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:07.397025492 +0000 UTC m=+144.307888115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.930804 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-utilities\") pod \"certified-operators-t7jz2\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") " pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.967418 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2cvj\" (UniqueName: \"kubernetes.io/projected/789b9f46-f242-43cc-b598-a8571134461d-kube-api-access-t2cvj\") pod \"certified-operators-t7jz2\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") " pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.991424 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.991780 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-utilities\") pod \"community-operators-gl4z4\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") " pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.991942 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-catalog-content\") pod \"community-operators-gl4z4\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") " pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:06 crc kubenswrapper[4756]: I0224 00:08:06.991984 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k6pb\" (UniqueName: \"kubernetes.io/projected/5245508c-ca61-4b00-8909-21b4abc6458a-kube-api-access-7k6pb\") pod \"community-operators-gl4z4\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") " pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:06 crc kubenswrapper[4756]: E0224 00:08:06.992171 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:07.492145894 +0000 UTC m=+144.403008527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.003185 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.033397 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gl4z4"]
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.097131 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xrpb6"]
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.097812 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" podUID="7eb6916d-1721-48bf-b67b-00ccc0144871" containerName="controller-manager" containerID="cri-o://a1aa4e61479551f0520f11be5aa37e611f275f08a88dc5cb27c0d9db0d82a50b" gracePeriod=30
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.100470 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.100620 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-catalog-content\") pod \"community-operators-gl4z4\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") " pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.100644 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k6pb\" (UniqueName: \"kubernetes.io/projected/5245508c-ca61-4b00-8909-21b4abc6458a-kube-api-access-7k6pb\") pod \"community-operators-gl4z4\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") " pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.100672 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-utilities\") pod \"community-operators-gl4z4\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") " pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.101202 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-utilities\") pod \"community-operators-gl4z4\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") " pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:07 crc kubenswrapper[4756]: E0224 00:08:07.101550 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:07.601533469 +0000 UTC m=+144.512396102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.101772 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-catalog-content\") pod \"community-operators-gl4z4\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") " pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.175955 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k6pb\" (UniqueName: \"kubernetes.io/projected/5245508c-ca61-4b00-8909-21b4abc6458a-kube-api-access-7k6pb\") pod \"community-operators-gl4z4\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") " pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.186077 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl"]
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.186462 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" podUID="344d0ffc-25ff-4503-a029-129a7e178a11" containerName="route-controller-manager" containerID="cri-o://1ed00cc7a370b9408d46aac49c835895425575614cf41fcd795bc7bc878c6342" gracePeriod=30
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.203007 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:07 crc kubenswrapper[4756]: E0224 00:08:07.203485 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:07.703466717 +0000 UTC m=+144.614329350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.254994 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.280030 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z94cj"]
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.310107 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:07 crc kubenswrapper[4756]: E0224 00:08:07.310586 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:07.810573633 +0000 UTC m=+144.721436266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.389051 4756 generic.go:334] "Generic (PLEG): container finished" podID="7eb6916d-1721-48bf-b67b-00ccc0144871" containerID="a1aa4e61479551f0520f11be5aa37e611f275f08a88dc5cb27c0d9db0d82a50b" exitCode=0
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.389561 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" event={"ID":"7eb6916d-1721-48bf-b67b-00ccc0144871","Type":"ContainerDied","Data":"a1aa4e61479551f0520f11be5aa37e611f275f08a88dc5cb27c0d9db0d82a50b"}
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.396690 4756 generic.go:334] "Generic (PLEG): container finished" podID="344d0ffc-25ff-4503-a029-129a7e178a11" containerID="1ed00cc7a370b9408d46aac49c835895425575614cf41fcd795bc7bc878c6342" exitCode=0
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.396823 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" event={"ID":"344d0ffc-25ff-4503-a029-129a7e178a11","Type":"ContainerDied","Data":"1ed00cc7a370b9408d46aac49c835895425575614cf41fcd795bc7bc878c6342"}
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.411118 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:07 crc kubenswrapper[4756]: E0224 00:08:07.411627 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:07.911601889 +0000 UTC m=+144.822464522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.434019 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t9hch"]
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.448224 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" event={"ID":"482daac8-f9ac-43d2-a101-cf64acefa9d3","Type":"ContainerStarted","Data":"139a557a3b36818fc5e99fcef9e610fbc3f18c90429f3e9e8e02821023ee0401"}
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.496214 4756 generic.go:334] "Generic (PLEG): container finished" podID="f19c9286-752d-420e-83bb-010eefd59ea1" containerID="8b8cad27ed949c46ffffc282530add14aa362ad3037ffc819e1a7a6cb71fa87f" exitCode=0
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.499145 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" event={"ID":"f19c9286-752d-420e-83bb-010eefd59ea1","Type":"ContainerDied","Data":"8b8cad27ed949c46ffffc282530add14aa362ad3037ffc819e1a7a6cb71fa87f"}
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.517224 4756 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-j7xwn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.517267 4756 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.517291 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" podUID="11fea2b9-4369-4534-8101-5fc365d29723" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.518618 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:07 crc kubenswrapper[4756]: E0224 00:08:07.518924 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:08.018910403 +0000 UTC m=+144.929773036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.620088 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:07 crc kubenswrapper[4756]: E0224 00:08:07.626269 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:08.126228206 +0000 UTC m=+145.037090839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.631944 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:07 crc kubenswrapper[4756]: E0224 00:08:07.635947 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:08.135925701 +0000 UTC m=+145.046788334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.725514 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t7jz2"]
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.734886 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:07 crc kubenswrapper[4756]: E0224 00:08:07.735712 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:08.235683333 +0000 UTC m=+145.146545966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:07 crc kubenswrapper[4756]: W0224 00:08:07.758349 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod789b9f46_f242_43cc_b598_a8571134461d.slice/crio-2cadb450952d6c309e5822d7d6a02955d789fa7e47ed7bbbad41e03c6c35f6f3 WatchSource:0}: Error finding container 2cadb450952d6c309e5822d7d6a02955d789fa7e47ed7bbbad41e03c6c35f6f3: Status 404 returned error can't find the container with id 2cadb450952d6c309e5822d7d6a02955d789fa7e47ed7bbbad41e03c6c35f6f3
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.785916 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.800269 4756 patch_prober.go:28] interesting pod/router-default-5444994796-522ht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 24 00:08:07 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld
Feb 24 00:08:07 crc kubenswrapper[4756]: [+]process-running ok
Feb 24 00:08:07 crc kubenswrapper[4756]: healthz check failed
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.800339 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-522ht" podUID="454c827a-ccb7-4b43-ac8a-bd1b38e77616" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.803224 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gl4z4"]
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.836480 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:07 crc kubenswrapper[4756]: E0224 00:08:07.837020 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-24 00:08:08.3369977 +0000 UTC m=+145.247860333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-689hw" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.938056 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eb6916d-1721-48bf-b67b-00ccc0144871-serving-cert\") pod \"7eb6916d-1721-48bf-b67b-00ccc0144871\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") "
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.938194 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-config\") pod \"7eb6916d-1721-48bf-b67b-00ccc0144871\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") "
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.938237 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-client-ca\") pod \"7eb6916d-1721-48bf-b67b-00ccc0144871\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") "
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.938310 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ntc2\" (UniqueName: \"kubernetes.io/projected/7eb6916d-1721-48bf-b67b-00ccc0144871-kube-api-access-9ntc2\") pod \"7eb6916d-1721-48bf-b67b-00ccc0144871\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") "
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.938356 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-proxy-ca-bundles\") pod \"7eb6916d-1721-48bf-b67b-00ccc0144871\" (UID: \"7eb6916d-1721-48bf-b67b-00ccc0144871\") "
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.938487 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:07 crc kubenswrapper[4756]: E0224 00:08:07.938884 4756 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-24 00:08:08.438863335 +0000 UTC m=+145.349725958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.939379 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7eb6916d-1721-48bf-b67b-00ccc0144871" (UID: "7eb6916d-1721-48bf-b67b-00ccc0144871"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.939363 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-client-ca" (OuterVolumeSpecName: "client-ca") pod "7eb6916d-1721-48bf-b67b-00ccc0144871" (UID: "7eb6916d-1721-48bf-b67b-00ccc0144871"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.939456 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-config" (OuterVolumeSpecName: "config") pod "7eb6916d-1721-48bf-b67b-00ccc0144871" (UID: "7eb6916d-1721-48bf-b67b-00ccc0144871"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.950609 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb6916d-1721-48bf-b67b-00ccc0144871-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7eb6916d-1721-48bf-b67b-00ccc0144871" (UID: "7eb6916d-1721-48bf-b67b-00ccc0144871"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 00:08:07 crc kubenswrapper[4756]: I0224 00:08:07.953086 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb6916d-1721-48bf-b67b-00ccc0144871-kube-api-access-9ntc2" (OuterVolumeSpecName: "kube-api-access-9ntc2") pod "7eb6916d-1721-48bf-b67b-00ccc0144871" (UID: "7eb6916d-1721-48bf-b67b-00ccc0144871"). InnerVolumeSpecName "kube-api-access-9ntc2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.007762 4756 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-24T00:08:07.517288457Z","Handler":null,"Name":""}
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.010916 4756 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.010964 4756 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.040823 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.040912 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.040936 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-client-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.040948 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ntc2\" (UniqueName: \"kubernetes.io/projected/7eb6916d-1721-48bf-b67b-00ccc0144871-kube-api-access-9ntc2\") on node \"crc\" DevicePath \"\""
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.040961 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eb6916d-1721-48bf-b67b-00ccc0144871-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.040973 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eb6916d-1721-48bf-b67b-00ccc0144871-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.044274 4756 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.044329 4756 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.066528 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-689hw\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.141734 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.145622 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-689hw"
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.148412 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.162650 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.243661 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/344d0ffc-25ff-4503-a029-129a7e178a11-serving-cert\") pod \"344d0ffc-25ff-4503-a029-129a7e178a11\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.243768 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9r8b\" (UniqueName: \"kubernetes.io/projected/344d0ffc-25ff-4503-a029-129a7e178a11-kube-api-access-t9r8b\") pod \"344d0ffc-25ff-4503-a029-129a7e178a11\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.243840 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-config\") pod \"344d0ffc-25ff-4503-a029-129a7e178a11\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.243956 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-client-ca\") pod \"344d0ffc-25ff-4503-a029-129a7e178a11\" (UID: \"344d0ffc-25ff-4503-a029-129a7e178a11\") " Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.244976 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-client-ca" (OuterVolumeSpecName: "client-ca") pod "344d0ffc-25ff-4503-a029-129a7e178a11" (UID: "344d0ffc-25ff-4503-a029-129a7e178a11"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.246316 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-config" (OuterVolumeSpecName: "config") pod "344d0ffc-25ff-4503-a029-129a7e178a11" (UID: "344d0ffc-25ff-4503-a029-129a7e178a11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.250876 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/344d0ffc-25ff-4503-a029-129a7e178a11-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "344d0ffc-25ff-4503-a029-129a7e178a11" (UID: "344d0ffc-25ff-4503-a029-129a7e178a11"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.251156 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/344d0ffc-25ff-4503-a029-129a7e178a11-kube-api-access-t9r8b" (OuterVolumeSpecName: "kube-api-access-t9r8b") pod "344d0ffc-25ff-4503-a029-129a7e178a11" (UID: "344d0ffc-25ff-4503-a029-129a7e178a11"). InnerVolumeSpecName "kube-api-access-t9r8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.345786 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9r8b\" (UniqueName: \"kubernetes.io/projected/344d0ffc-25ff-4503-a029-129a7e178a11-kube-api-access-t9r8b\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.346330 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.346347 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/344d0ffc-25ff-4503-a029-129a7e178a11-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.346361 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/344d0ffc-25ff-4503-a029-129a7e178a11-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.380573 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-689hw"] Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.427772 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2xptg"] Feb 24 00:08:08 crc kubenswrapper[4756]: E0224 00:08:08.428025 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb6916d-1721-48bf-b67b-00ccc0144871" containerName="controller-manager" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.428040 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb6916d-1721-48bf-b67b-00ccc0144871" containerName="controller-manager" Feb 24 00:08:08 crc kubenswrapper[4756]: E0224 00:08:08.428050 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="344d0ffc-25ff-4503-a029-129a7e178a11" containerName="route-controller-manager" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.428057 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="344d0ffc-25ff-4503-a029-129a7e178a11" containerName="route-controller-manager" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.428186 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="344d0ffc-25ff-4503-a029-129a7e178a11" containerName="route-controller-manager" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.428209 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb6916d-1721-48bf-b67b-00ccc0144871" containerName="controller-manager" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.429002 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.431516 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.439015 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xptg"] Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.505888 4756 generic.go:334] "Generic (PLEG): container finished" podID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" containerID="860ce08ae32352a059da4922cd8e3f0d96160e638576a492bec0d32ba4e100d8" exitCode=0 Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.505939 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9hch" event={"ID":"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4","Type":"ContainerDied","Data":"860ce08ae32352a059da4922cd8e3f0d96160e638576a492bec0d32ba4e100d8"} Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.507219 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-t9hch" event={"ID":"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4","Type":"ContainerStarted","Data":"28e00606df81738f5d28d176766adf976b421e421eb54cb99623281ba24ade77"} Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.508839 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.513840 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" event={"ID":"7eb6916d-1721-48bf-b67b-00ccc0144871","Type":"ContainerDied","Data":"8c8aa9128f6d1fc38b00876a261ebf98be88b6bd08fe7dfec32345e00608e447"} Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.513915 4756 scope.go:117] "RemoveContainer" containerID="a1aa4e61479551f0520f11be5aa37e611f275f08a88dc5cb27c0d9db0d82a50b" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.514093 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-xrpb6" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.516831 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" event={"ID":"344d0ffc-25ff-4503-a029-129a7e178a11","Type":"ContainerDied","Data":"c46a212f5d2e47084513d248b3c27be94e4b28bbec866fd6b157a0caf402e8ac"} Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.516910 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.518286 4756 generic.go:334] "Generic (PLEG): container finished" podID="789b9f46-f242-43cc-b598-a8571134461d" containerID="caa92e03500c8e149759c94d9219592c01d887b8af65232a543f203483316164" exitCode=0 Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.518360 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7jz2" event={"ID":"789b9f46-f242-43cc-b598-a8571134461d","Type":"ContainerDied","Data":"caa92e03500c8e149759c94d9219592c01d887b8af65232a543f203483316164"} Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.518392 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7jz2" event={"ID":"789b9f46-f242-43cc-b598-a8571134461d","Type":"ContainerStarted","Data":"2cadb450952d6c309e5822d7d6a02955d789fa7e47ed7bbbad41e03c6c35f6f3"} Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.524270 4756 generic.go:334] "Generic (PLEG): container finished" podID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" containerID="e239d8e1f262aa02bbbe72ea503ab4b7883f43dc4655c083f31ad0d5be1cbb04" exitCode=0 Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.524335 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z94cj" event={"ID":"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7","Type":"ContainerDied","Data":"e239d8e1f262aa02bbbe72ea503ab4b7883f43dc4655c083f31ad0d5be1cbb04"} Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.524358 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z94cj" event={"ID":"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7","Type":"ContainerStarted","Data":"ec2d399c28f6cafa0bba31dbbe45f0d60b250ab05e7fcb8b9a89f1b86934fda3"} Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.541377 4756 scope.go:117] 
"RemoveContainer" containerID="1ed00cc7a370b9408d46aac49c835895425575614cf41fcd795bc7bc878c6342" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.542958 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" event={"ID":"482daac8-f9ac-43d2-a101-cf64acefa9d3","Type":"ContainerStarted","Data":"94d6f3babd211011879e0db7b295599fbf72e552a4c851bac14168cbbb86dd97"} Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.549212 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-utilities\") pod \"redhat-marketplace-2xptg\" (UID: \"5584028b-2ad5-445c-b32a-0a342a022265\") " pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.549315 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-catalog-content\") pod \"redhat-marketplace-2xptg\" (UID: \"5584028b-2ad5-445c-b32a-0a342a022265\") " pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.549368 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klnrp\" (UniqueName: \"kubernetes.io/projected/5584028b-2ad5-445c-b32a-0a342a022265-kube-api-access-klnrp\") pod \"redhat-marketplace-2xptg\" (UID: \"5584028b-2ad5-445c-b32a-0a342a022265\") " pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.550660 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" event={"ID":"1edac0e9-6c20-41fe-83ad-6ade8001a0b9","Type":"ContainerStarted","Data":"a9ab4ac8eaa863c2c058cdf24dd900a2ed332b6141101149598ab8b9aa41643f"} Feb 
24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.551475 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.555307 4756 generic.go:334] "Generic (PLEG): container finished" podID="5245508c-ca61-4b00-8909-21b4abc6458a" containerID="31b8177e2c896b91cfb7f2352e06c1dfbe7c4418bd84bd11a103b51af50daece" exitCode=0 Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.556333 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4z4" event={"ID":"5245508c-ca61-4b00-8909-21b4abc6458a","Type":"ContainerDied","Data":"31b8177e2c896b91cfb7f2352e06c1dfbe7c4418bd84bd11a103b51af50daece"} Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.556365 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4z4" event={"ID":"5245508c-ca61-4b00-8909-21b4abc6458a","Type":"ContainerStarted","Data":"28f6a3adf3ed778ade14058abfbd40b8f56eb7a2b71e9b71f499115fcc481985"} Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.635379 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" podStartSLOduration=84.63535235 podStartE2EDuration="1m24.63535235s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:08.630544424 +0000 UTC m=+145.541407077" watchObservedRunningTime="2026-02-24 00:08:08.63535235 +0000 UTC m=+145.546214973" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.650236 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-utilities\") pod \"redhat-marketplace-2xptg\" (UID: 
\"5584028b-2ad5-445c-b32a-0a342a022265\") " pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.650390 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-catalog-content\") pod \"redhat-marketplace-2xptg\" (UID: \"5584028b-2ad5-445c-b32a-0a342a022265\") " pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.650431 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klnrp\" (UniqueName: \"kubernetes.io/projected/5584028b-2ad5-445c-b32a-0a342a022265-kube-api-access-klnrp\") pod \"redhat-marketplace-2xptg\" (UID: \"5584028b-2ad5-445c-b32a-0a342a022265\") " pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.651571 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-xrpb6"] Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.652252 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-catalog-content\") pod \"redhat-marketplace-2xptg\" (UID: \"5584028b-2ad5-445c-b32a-0a342a022265\") " pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.652703 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-utilities\") pod \"redhat-marketplace-2xptg\" (UID: \"5584028b-2ad5-445c-b32a-0a342a022265\") " pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.654334 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-xrpb6"] Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.664235 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl"] Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.669873 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zldwl"] Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.675613 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klnrp\" (UniqueName: \"kubernetes.io/projected/5584028b-2ad5-445c-b32a-0a342a022265-kube-api-access-klnrp\") pod \"redhat-marketplace-2xptg\" (UID: \"5584028b-2ad5-445c-b32a-0a342a022265\") " pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.698699 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-gdn5q" podStartSLOduration=11.698672165 podStartE2EDuration="11.698672165s" podCreationTimestamp="2026-02-24 00:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:08.694627836 +0000 UTC m=+145.605490469" watchObservedRunningTime="2026-02-24 00:08:08.698672165 +0000 UTC m=+145.609534818" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.745968 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.797823 4756 patch_prober.go:28] interesting pod/router-default-5444994796-522ht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:08:08 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Feb 24 00:08:08 crc kubenswrapper[4756]: [+]process-running ok Feb 24 00:08:08 crc kubenswrapper[4756]: healthz check failed Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.797914 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-522ht" podUID="454c827a-ccb7-4b43-ac8a-bd1b38e77616" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.809455 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.842315 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5w42"] Feb 24 00:08:08 crc kubenswrapper[4756]: E0224 00:08:08.842665 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19c9286-752d-420e-83bb-010eefd59ea1" containerName="collect-profiles" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.842684 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19c9286-752d-420e-83bb-010eefd59ea1" containerName="collect-profiles" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.842818 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19c9286-752d-420e-83bb-010eefd59ea1" containerName="collect-profiles" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.843785 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.853729 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5w42"] Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.954083 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f19c9286-752d-420e-83bb-010eefd59ea1-secret-volume\") pod \"f19c9286-752d-420e-83bb-010eefd59ea1\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.954211 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f19c9286-752d-420e-83bb-010eefd59ea1-config-volume\") pod \"f19c9286-752d-420e-83bb-010eefd59ea1\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.954283 
4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fbhk\" (UniqueName: \"kubernetes.io/projected/f19c9286-752d-420e-83bb-010eefd59ea1-kube-api-access-5fbhk\") pod \"f19c9286-752d-420e-83bb-010eefd59ea1\" (UID: \"f19c9286-752d-420e-83bb-010eefd59ea1\") " Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.954565 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-utilities\") pod \"redhat-marketplace-n5w42\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") " pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.954698 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5jd8\" (UniqueName: \"kubernetes.io/projected/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-kube-api-access-c5jd8\") pod \"redhat-marketplace-n5w42\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") " pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.954757 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-catalog-content\") pod \"redhat-marketplace-n5w42\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") " pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.957050 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19c9286-752d-420e-83bb-010eefd59ea1-config-volume" (OuterVolumeSpecName: "config-volume") pod "f19c9286-752d-420e-83bb-010eefd59ea1" (UID: "f19c9286-752d-420e-83bb-010eefd59ea1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.964758 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19c9286-752d-420e-83bb-010eefd59ea1-kube-api-access-5fbhk" (OuterVolumeSpecName: "kube-api-access-5fbhk") pod "f19c9286-752d-420e-83bb-010eefd59ea1" (UID: "f19c9286-752d-420e-83bb-010eefd59ea1"). InnerVolumeSpecName "kube-api-access-5fbhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.971787 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19c9286-752d-420e-83bb-010eefd59ea1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f19c9286-752d-420e-83bb-010eefd59ea1" (UID: "f19c9286-752d-420e-83bb-010eefd59ea1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.990205 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 00:08:08 crc kubenswrapper[4756]: I0224 00:08:08.990911 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.000632 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.000840 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.000938 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.021143 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xptg"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.038348 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.039579 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.042375 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.044396 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.044458 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.044531 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.044573 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.044458 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.050683 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b57df8f7-vrtxl"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.052909 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.056398 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.056494 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.057644 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.057700 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.057797 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.057971 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"855bc0f2-7551-4683-a84b-7f88e3d93d3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.058043 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs\") pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.058121 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-utilities\") pod \"redhat-marketplace-n5w42\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") " pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.058149 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"855bc0f2-7551-4683-a84b-7f88e3d93d3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.058450 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.059037 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-utilities\") pod \"redhat-marketplace-n5w42\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") " pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.063316 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1508259c-4154-46a3-a390-2200d44b9524-metrics-certs\") pod \"network-metrics-daemon-jlcw6\" (UID: \"1508259c-4154-46a3-a390-2200d44b9524\") " pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.058185 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5jd8\" (UniqueName: \"kubernetes.io/projected/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-kube-api-access-c5jd8\") pod \"redhat-marketplace-n5w42\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") " pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 
00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.066728 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-catalog-content\") pod \"redhat-marketplace-n5w42\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") " pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.066975 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fbhk\" (UniqueName: \"kubernetes.io/projected/f19c9286-752d-420e-83bb-010eefd59ea1-kube-api-access-5fbhk\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.066997 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f19c9286-752d-420e-83bb-010eefd59ea1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.067012 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f19c9286-752d-420e-83bb-010eefd59ea1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.067481 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.067580 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-catalog-content\") pod \"redhat-marketplace-n5w42\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") " pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.072178 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 00:08:09 crc 
kubenswrapper[4756]: I0224 00:08:09.078378 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5jd8\" (UniqueName: \"kubernetes.io/projected/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-kube-api-access-c5jd8\") pod \"redhat-marketplace-n5w42\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") " pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.081364 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b57df8f7-vrtxl"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.160937 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.168848 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vthd\" (UniqueName: \"kubernetes.io/projected/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-kube-api-access-9vthd\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.168908 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-client-ca\") pod \"route-controller-manager-8654ff864-lwgxg\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.168949 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-config\") pod \"route-controller-manager-8654ff864-lwgxg\" (UID: 
\"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.168987 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-proxy-ca-bundles\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.169033 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"855bc0f2-7551-4683-a84b-7f88e3d93d3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.169083 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-serving-cert\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.169126 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-client-ca\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.169167 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-serving-cert\") pod \"route-controller-manager-8654ff864-lwgxg\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.169203 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"855bc0f2-7551-4683-a84b-7f88e3d93d3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.169228 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dfc8\" (UniqueName: \"kubernetes.io/projected/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-kube-api-access-7dfc8\") pod \"route-controller-manager-8654ff864-lwgxg\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.169273 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-config\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.169829 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"855bc0f2-7551-4683-a84b-7f88e3d93d3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: 
I0224 00:08:09.194804 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"855bc0f2-7551-4683-a84b-7f88e3d93d3c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.270146 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dfc8\" (UniqueName: \"kubernetes.io/projected/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-kube-api-access-7dfc8\") pod \"route-controller-manager-8654ff864-lwgxg\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.270233 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-config\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.270276 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vthd\" (UniqueName: \"kubernetes.io/projected/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-kube-api-access-9vthd\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.270306 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-client-ca\") pod \"route-controller-manager-8654ff864-lwgxg\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " 
pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.270334 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-config\") pod \"route-controller-manager-8654ff864-lwgxg\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.270365 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-proxy-ca-bundles\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.270405 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-serving-cert\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.270442 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-client-ca\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.270472 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-serving-cert\") pod 
\"route-controller-manager-8654ff864-lwgxg\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.272425 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-proxy-ca-bundles\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.272760 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-client-ca\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.272912 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-config\") pod \"route-controller-manager-8654ff864-lwgxg\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.272949 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-client-ca\") pod \"route-controller-manager-8654ff864-lwgxg\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.274526 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-serving-cert\") pod \"route-controller-manager-8654ff864-lwgxg\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.274590 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-config\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.275151 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-serving-cert\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.290500 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dfc8\" (UniqueName: \"kubernetes.io/projected/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-kube-api-access-7dfc8\") pod \"route-controller-manager-8654ff864-lwgxg\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.303013 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vthd\" (UniqueName: \"kubernetes.io/projected/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-kube-api-access-9vthd\") pod \"controller-manager-b57df8f7-vrtxl\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.315646 4756 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.354579 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jlcw6" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.366931 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.388178 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.416729 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5w42"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.432647 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zcv6x"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.434564 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.438392 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.456206 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcv6x"] Feb 24 00:08:09 crc kubenswrapper[4756]: W0224 00:08:09.464588 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f1e9439_3efd_4fdc_97ec_a0864e307ec9.slice/crio-b8f6d42ebc68bf3b4dba93ab758a3d2307ae6e69ff0bdbdd62cc750a8bf6c1b6 WatchSource:0}: Error finding container b8f6d42ebc68bf3b4dba93ab758a3d2307ae6e69ff0bdbdd62cc750a8bf6c1b6: Status 404 returned error can't find the container with id b8f6d42ebc68bf3b4dba93ab758a3d2307ae6e69ff0bdbdd62cc750a8bf6c1b6 Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.465573 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.465654 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.476773 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.574553 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4hbk\" (UniqueName: \"kubernetes.io/projected/e83e15d6-954c-439b-b14e-78527bac2d45-kube-api-access-h4hbk\") pod \"redhat-operators-zcv6x\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:09 crc 
kubenswrapper[4756]: I0224 00:08:09.574636 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-catalog-content\") pod \"redhat-operators-zcv6x\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.574811 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-utilities\") pod \"redhat-operators-zcv6x\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.596708 4756 generic.go:334] "Generic (PLEG): container finished" podID="5584028b-2ad5-445c-b32a-0a342a022265" containerID="5e16b157c80ea50e06005e05b4f0aea0e6bc8e150e89a2199af47f1ab837c959" exitCode=0 Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.597112 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xptg" event={"ID":"5584028b-2ad5-445c-b32a-0a342a022265","Type":"ContainerDied","Data":"5e16b157c80ea50e06005e05b4f0aea0e6bc8e150e89a2199af47f1ab837c959"} Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.597209 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xptg" event={"ID":"5584028b-2ad5-445c-b32a-0a342a022265","Type":"ContainerStarted","Data":"cf293041dc71ba6cb7bc08e98713d06257f116d842a692d424c37ed828567ba6"} Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.624009 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.624143 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.629532 4756 patch_prober.go:28] interesting pod/downloads-7954f5f757-24rzj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.629602 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-24rzj" podUID="735c2ab3-4d91-4906-81f6-77224425b729" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.635032 4756 patch_prober.go:28] interesting pod/downloads-7954f5f757-24rzj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.635129 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-24rzj" podUID="735c2ab3-4d91-4906-81f6-77224425b729" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.644554 4756 patch_prober.go:28] interesting pod/console-f9d7485db-4jtsc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.644658 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4jtsc" podUID="f79308a5-8560-4d7d-9180-92f05193a4ce" 
containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.686051 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-catalog-content\") pod \"redhat-operators-zcv6x\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.686440 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-utilities\") pod \"redhat-operators-zcv6x\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.686826 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4hbk\" (UniqueName: \"kubernetes.io/projected/e83e15d6-954c-439b-b14e-78527bac2d45-kube-api-access-h4hbk\") pod \"redhat-operators-zcv6x\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.687324 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-catalog-content\") pod \"redhat-operators-zcv6x\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.688164 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" 
event={"ID":"1edac0e9-6c20-41fe-83ad-6ade8001a0b9","Type":"ContainerStarted","Data":"368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f"} Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.688214 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-utilities\") pod \"redhat-operators-zcv6x\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.707620 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" event={"ID":"f19c9286-752d-420e-83bb-010eefd59ea1","Type":"ContainerDied","Data":"a6b61d0dda76bf5a41ed79026064eecc5dfdd720efee4231b1aceefe917f7b5c"} Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.707684 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b61d0dda76bf5a41ed79026064eecc5dfdd720efee4231b1aceefe917f7b5c" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.707822 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531520-9rhnh" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.717593 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.718905 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.723036 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4hbk\" (UniqueName: \"kubernetes.io/projected/e83e15d6-954c-439b-b14e-78527bac2d45-kube-api-access-h4hbk\") pod \"redhat-operators-zcv6x\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.725716 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.726100 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.736964 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.737592 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.738245 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.764051 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5w42" event={"ID":"4f1e9439-3efd-4fdc-97ec-a0864e307ec9","Type":"ContainerStarted","Data":"b8f6d42ebc68bf3b4dba93ab758a3d2307ae6e69ff0bdbdd62cc750a8bf6c1b6"} Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.775694 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.789118 4756 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.790016 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.790116 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.794092 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.794204 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-tx4dr" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.798529 4756 patch_prober.go:28] interesting pod/router-default-5444994796-522ht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:08:09 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Feb 24 00:08:09 crc kubenswrapper[4756]: [+]process-running ok Feb 24 00:08:09 crc kubenswrapper[4756]: healthz check failed Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.798612 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-522ht" 
podUID="454c827a-ccb7-4b43-ac8a-bd1b38e77616" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.889818 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="344d0ffc-25ff-4503-a029-129a7e178a11" path="/var/lib/kubelet/pods/344d0ffc-25ff-4503-a029-129a7e178a11/volumes" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.891987 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.892044 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.892284 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.902511 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb6916d-1721-48bf-b67b-00ccc0144871" path="/var/lib/kubelet/pods/7eb6916d-1721-48bf-b67b-00ccc0144871/volumes" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.903471 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" 
path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.903943 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p54ks"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.905773 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.907677 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p54ks"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.937600 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.942598 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jlcw6"] Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.999453 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-catalog-content\") pod \"redhat-operators-p54ks\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:08:09 crc kubenswrapper[4756]: I0224 00:08:09.999604 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-utilities\") pod \"redhat-operators-p54ks\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:08:09 crc 
kubenswrapper[4756]: I0224 00:08:09.999648 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj98q\" (UniqueName: \"kubernetes.io/projected/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-kube-api-access-mj98q\") pod \"redhat-operators-p54ks\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.045054 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg"] Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.079404 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.100701 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-catalog-content\") pod \"redhat-operators-p54ks\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.100781 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-utilities\") pod \"redhat-operators-p54ks\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.100808 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj98q\" (UniqueName: \"kubernetes.io/projected/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-kube-api-access-mj98q\") pod \"redhat-operators-p54ks\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:08:10 crc 
kubenswrapper[4756]: I0224 00:08:10.101675 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-catalog-content\") pod \"redhat-operators-p54ks\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.101905 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-utilities\") pod \"redhat-operators-p54ks\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.144998 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj98q\" (UniqueName: \"kubernetes.io/projected/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-kube-api-access-mj98q\") pod \"redhat-operators-p54ks\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.186322 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b57df8f7-vrtxl"] Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.341988 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.555271 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.697983 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcv6x"] Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.792198 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.805697 4756 patch_prober.go:28] interesting pod/router-default-5444994796-522ht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 24 00:08:10 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Feb 24 00:08:10 crc kubenswrapper[4756]: [+]process-running ok Feb 24 00:08:10 crc kubenswrapper[4756]: healthz check failed Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.805783 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-522ht" podUID="454c827a-ccb7-4b43-ac8a-bd1b38e77616" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.813346 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.960637 4756 generic.go:334] "Generic (PLEG): container finished" podID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerID="40bdc5ce27ecd77c0b95b53c658c126ff2fd2ab01da179ac48c48064eb5e9431" exitCode=0 Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.961346 4756 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-n5w42" event={"ID":"4f1e9439-3efd-4fdc-97ec-a0864e307ec9","Type":"ContainerDied","Data":"40bdc5ce27ecd77c0b95b53c658c126ff2fd2ab01da179ac48c48064eb5e9431"} Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.988178 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jlcw6" event={"ID":"1508259c-4154-46a3-a390-2200d44b9524","Type":"ContainerStarted","Data":"0b7d292df57f2c32c49739dc1224fe2235193d21293ce6af51fbd33d9ec91d33"} Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.988235 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jlcw6" event={"ID":"1508259c-4154-46a3-a390-2200d44b9524","Type":"ContainerStarted","Data":"546169b86588126bcded1436d4231752b313b501e5ee88989df75970cd46ff45"} Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.994221 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"855bc0f2-7551-4683-a84b-7f88e3d93d3c","Type":"ContainerStarted","Data":"7a4e5695f57c227c9be7d0895069f3df7c68a634138ed52cf8ec16b81fe34921"} Feb 24 00:08:10 crc kubenswrapper[4756]: I0224 00:08:10.999252 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b","Type":"ContainerStarted","Data":"b7d799085dedd90fe55b93211cba1037900bd0d533353992df3abbd174242e85"} Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.002573 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" event={"ID":"bab5942a-cef0-47a2-bf68-dca1c8ac14fa","Type":"ContainerStarted","Data":"72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807"} Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.002720 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" event={"ID":"bab5942a-cef0-47a2-bf68-dca1c8ac14fa","Type":"ContainerStarted","Data":"60b2f316c3ba9d650b8c6ca0b4a774df42197de177567dd4eb0c1eb768e232bd"} Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.008356 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.022314 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.023319 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcv6x" event={"ID":"e83e15d6-954c-439b-b14e-78527bac2d45","Type":"ContainerStarted","Data":"568a32edcea383571e16973a06878f42139241ae25d752022dfa8e1a3f54dc78"} Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.040299 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" podStartSLOduration=4.040250261 podStartE2EDuration="4.040250261s" podCreationTimestamp="2026-02-24 00:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:11.033438696 +0000 UTC m=+147.944301329" watchObservedRunningTime="2026-02-24 00:08:11.040250261 +0000 UTC m=+147.951112894" Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.042806 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p54ks"] Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.049619 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" 
event={"ID":"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1","Type":"ContainerStarted","Data":"af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4"} Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.049674 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" event={"ID":"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1","Type":"ContainerStarted","Data":"500b971cd179cdc596d0923b788b3917c270e6abb99043615aa747cca39cdfaa"} Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.050723 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.056238 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pbrtc" Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.067522 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.209338 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" podStartSLOduration=4.209320725 podStartE2EDuration="4.209320725s" podCreationTimestamp="2026-02-24 00:08:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:11.177227868 +0000 UTC m=+148.088090511" watchObservedRunningTime="2026-02-24 00:08:11.209320725 +0000 UTC m=+148.120183358" Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.807679 4756 patch_prober.go:28] interesting pod/router-default-5444994796-522ht container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 24 00:08:11 crc kubenswrapper[4756]: [-]has-synced failed: reason withheld Feb 24 00:08:11 crc kubenswrapper[4756]: [+]process-running ok Feb 24 00:08:11 crc kubenswrapper[4756]: healthz check failed Feb 24 00:08:11 crc kubenswrapper[4756]: I0224 00:08:11.810033 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-522ht" podUID="454c827a-ccb7-4b43-ac8a-bd1b38e77616" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 24 00:08:12 crc kubenswrapper[4756]: I0224 00:08:12.127971 4756 generic.go:334] "Generic (PLEG): container finished" podID="e83e15d6-954c-439b-b14e-78527bac2d45" containerID="0f1ab1f7d79b813abdaafd52888b9fb54f745887bc5b7d6dd2051640a5a17ff8" exitCode=0 Feb 24 00:08:12 crc kubenswrapper[4756]: I0224 00:08:12.128427 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcv6x" event={"ID":"e83e15d6-954c-439b-b14e-78527bac2d45","Type":"ContainerDied","Data":"0f1ab1f7d79b813abdaafd52888b9fb54f745887bc5b7d6dd2051640a5a17ff8"} Feb 24 00:08:12 crc kubenswrapper[4756]: I0224 00:08:12.151940 4756 generic.go:334] "Generic (PLEG): container finished" podID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerID="05fde111fb74f7453f4b104a8abc13d2242880114f65ddf6e2a539ad1635b002" exitCode=0 Feb 24 00:08:12 crc kubenswrapper[4756]: I0224 00:08:12.152017 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p54ks" event={"ID":"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782","Type":"ContainerDied","Data":"05fde111fb74f7453f4b104a8abc13d2242880114f65ddf6e2a539ad1635b002"} Feb 24 00:08:12 crc kubenswrapper[4756]: I0224 00:08:12.152053 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p54ks" 
event={"ID":"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782","Type":"ContainerStarted","Data":"442e57e9d1fc36613052788edddeb211bf4cba33a3769d771a39464481f5a4ca"} Feb 24 00:08:12 crc kubenswrapper[4756]: I0224 00:08:12.211435 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jlcw6" event={"ID":"1508259c-4154-46a3-a390-2200d44b9524","Type":"ContainerStarted","Data":"e9a56e13cb80f1d81a7b8e709ad2528cdd775b810e1444ffb2ee2781a531b25c"} Feb 24 00:08:12 crc kubenswrapper[4756]: I0224 00:08:12.220242 4756 generic.go:334] "Generic (PLEG): container finished" podID="855bc0f2-7551-4683-a84b-7f88e3d93d3c" containerID="615c95fb45f3c9b3f119b1eb444a926ffa7a6131b728f8e76803464c101b2756" exitCode=0 Feb 24 00:08:12 crc kubenswrapper[4756]: I0224 00:08:12.220393 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"855bc0f2-7551-4683-a84b-7f88e3d93d3c","Type":"ContainerDied","Data":"615c95fb45f3c9b3f119b1eb444a926ffa7a6131b728f8e76803464c101b2756"} Feb 24 00:08:12 crc kubenswrapper[4756]: I0224 00:08:12.225719 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b","Type":"ContainerStarted","Data":"d605e2abcb3122ef35631f700bdc50fa0b2c07864507c5ba1cbc5513aaeb467c"} Feb 24 00:08:12 crc kubenswrapper[4756]: I0224 00:08:12.276816 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.276794793 podStartE2EDuration="3.276794793s" podCreationTimestamp="2026-02-24 00:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:12.274289097 +0000 UTC m=+149.185151720" watchObservedRunningTime="2026-02-24 00:08:12.276794793 +0000 UTC m=+149.187657426" Feb 24 00:08:12 crc kubenswrapper[4756]: 
I0224 00:08:12.803341 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:12 crc kubenswrapper[4756]: I0224 00:08:12.812794 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-522ht" Feb 24 00:08:13 crc kubenswrapper[4756]: I0224 00:08:13.241975 4756 generic.go:334] "Generic (PLEG): container finished" podID="b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b" containerID="d605e2abcb3122ef35631f700bdc50fa0b2c07864507c5ba1cbc5513aaeb467c" exitCode=0 Feb 24 00:08:13 crc kubenswrapper[4756]: I0224 00:08:13.243523 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b","Type":"ContainerDied","Data":"d605e2abcb3122ef35631f700bdc50fa0b2c07864507c5ba1cbc5513aaeb467c"} Feb 24 00:08:13 crc kubenswrapper[4756]: I0224 00:08:13.306105 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jlcw6" podStartSLOduration=89.306076442 podStartE2EDuration="1m29.306076442s" podCreationTimestamp="2026-02-24 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:13.305677538 +0000 UTC m=+150.216540181" watchObservedRunningTime="2026-02-24 00:08:13.306076442 +0000 UTC m=+150.216939075" Feb 24 00:08:13 crc kubenswrapper[4756]: I0224 00:08:13.735339 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:08:13 crc kubenswrapper[4756]: I0224 00:08:13.867977 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kubelet-dir\") pod \"855bc0f2-7551-4683-a84b-7f88e3d93d3c\" (UID: \"855bc0f2-7551-4683-a84b-7f88e3d93d3c\") " Feb 24 00:08:13 crc kubenswrapper[4756]: I0224 00:08:13.868108 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kube-api-access\") pod \"855bc0f2-7551-4683-a84b-7f88e3d93d3c\" (UID: \"855bc0f2-7551-4683-a84b-7f88e3d93d3c\") " Feb 24 00:08:13 crc kubenswrapper[4756]: I0224 00:08:13.868864 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "855bc0f2-7551-4683-a84b-7f88e3d93d3c" (UID: "855bc0f2-7551-4683-a84b-7f88e3d93d3c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:08:13 crc kubenswrapper[4756]: I0224 00:08:13.878336 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "855bc0f2-7551-4683-a84b-7f88e3d93d3c" (UID: "855bc0f2-7551-4683-a84b-7f88e3d93d3c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:08:13 crc kubenswrapper[4756]: I0224 00:08:13.969872 4756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:13 crc kubenswrapper[4756]: I0224 00:08:13.969905 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/855bc0f2-7551-4683-a84b-7f88e3d93d3c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:14 crc kubenswrapper[4756]: I0224 00:08:14.290755 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"855bc0f2-7551-4683-a84b-7f88e3d93d3c","Type":"ContainerDied","Data":"7a4e5695f57c227c9be7d0895069f3df7c68a634138ed52cf8ec16b81fe34921"} Feb 24 00:08:14 crc kubenswrapper[4756]: I0224 00:08:14.290818 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a4e5695f57c227c9be7d0895069f3df7c68a634138ed52cf8ec16b81fe34921" Feb 24 00:08:14 crc kubenswrapper[4756]: I0224 00:08:14.294757 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 24 00:08:14 crc kubenswrapper[4756]: I0224 00:08:14.726195 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:08:14 crc kubenswrapper[4756]: I0224 00:08:14.911883 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kube-api-access\") pod \"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b\" (UID: \"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b\") " Feb 24 00:08:14 crc kubenswrapper[4756]: I0224 00:08:14.912013 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kubelet-dir\") pod \"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b\" (UID: \"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b\") " Feb 24 00:08:14 crc kubenswrapper[4756]: I0224 00:08:14.912478 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b" (UID: "b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:08:14 crc kubenswrapper[4756]: I0224 00:08:14.917831 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b" (UID: "b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:08:15 crc kubenswrapper[4756]: I0224 00:08:15.014687 4756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:15 crc kubenswrapper[4756]: I0224 00:08:15.014721 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:15 crc kubenswrapper[4756]: I0224 00:08:15.312690 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b","Type":"ContainerDied","Data":"b7d799085dedd90fe55b93211cba1037900bd0d533353992df3abbd174242e85"} Feb 24 00:08:15 crc kubenswrapper[4756]: I0224 00:08:15.312767 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7d799085dedd90fe55b93211cba1037900bd0d533353992df3abbd174242e85" Feb 24 00:08:15 crc kubenswrapper[4756]: I0224 00:08:15.312823 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 24 00:08:15 crc kubenswrapper[4756]: I0224 00:08:15.571190 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qrnfg" Feb 24 00:08:19 crc kubenswrapper[4756]: I0224 00:08:19.625418 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:08:19 crc kubenswrapper[4756]: I0224 00:08:19.632513 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4jtsc" Feb 24 00:08:19 crc kubenswrapper[4756]: I0224 00:08:19.640537 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-24rzj" Feb 24 00:08:28 crc kubenswrapper[4756]: I0224 00:08:28.151759 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:08:28 crc kubenswrapper[4756]: I0224 00:08:28.763037 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:28 crc kubenswrapper[4756]: I0224 00:08:28.766429 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:28 crc kubenswrapper[4756]: I0224 00:08:28.865211 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:28 crc kubenswrapper[4756]: I0224 00:08:28.865812 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:28 crc kubenswrapper[4756]: I0224 00:08:28.865858 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:28 crc kubenswrapper[4756]: I0224 00:08:28.873887 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:28 crc kubenswrapper[4756]: I0224 00:08:28.874824 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:28 crc kubenswrapper[4756]: I0224 00:08:28.875304 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:29 crc kubenswrapper[4756]: I0224 00:08:29.057218 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 24 00:08:29 crc kubenswrapper[4756]: I0224 00:08:29.074388 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:29 crc kubenswrapper[4756]: I0224 00:08:29.081851 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 24 00:08:33 crc kubenswrapper[4756]: I0224 00:08:33.460493 4756 generic.go:334] "Generic (PLEG): container finished" podID="68e02cbc-c03c-45cd-916a-16dd3b0052cd" containerID="a7e922e372e3aa3ac209722cafacddf1998a86c937e1b9c1a766a2ac48852521" exitCode=0 Feb 24 00:08:33 crc kubenswrapper[4756]: I0224 00:08:33.460623 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-pdn56" event={"ID":"68e02cbc-c03c-45cd-916a-16dd3b0052cd","Type":"ContainerDied","Data":"a7e922e372e3aa3ac209722cafacddf1998a86c937e1b9c1a766a2ac48852521"} Feb 24 00:08:39 crc kubenswrapper[4756]: I0224 00:08:39.433904 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-pdn56" Feb 24 00:08:39 crc kubenswrapper[4756]: I0224 00:08:39.516742 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29531520-pdn56" event={"ID":"68e02cbc-c03c-45cd-916a-16dd3b0052cd","Type":"ContainerDied","Data":"32d56113c6686933c1913c1dc18fa0eaf88cc4683ef89bfe832b545f1eeabf55"} Feb 24 00:08:39 crc kubenswrapper[4756]: I0224 00:08:39.516825 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32d56113c6686933c1913c1dc18fa0eaf88cc4683ef89bfe832b545f1eeabf55" Feb 24 00:08:39 crc kubenswrapper[4756]: I0224 00:08:39.516972 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-pdn56" Feb 24 00:08:39 crc kubenswrapper[4756]: I0224 00:08:39.532283 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/68e02cbc-c03c-45cd-916a-16dd3b0052cd-serviceca\") pod \"68e02cbc-c03c-45cd-916a-16dd3b0052cd\" (UID: \"68e02cbc-c03c-45cd-916a-16dd3b0052cd\") " Feb 24 00:08:39 crc kubenswrapper[4756]: I0224 00:08:39.532388 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wlxw\" (UniqueName: \"kubernetes.io/projected/68e02cbc-c03c-45cd-916a-16dd3b0052cd-kube-api-access-7wlxw\") pod \"68e02cbc-c03c-45cd-916a-16dd3b0052cd\" (UID: \"68e02cbc-c03c-45cd-916a-16dd3b0052cd\") " Feb 24 00:08:39 crc kubenswrapper[4756]: I0224 00:08:39.533964 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68e02cbc-c03c-45cd-916a-16dd3b0052cd-serviceca" (OuterVolumeSpecName: "serviceca") pod "68e02cbc-c03c-45cd-916a-16dd3b0052cd" (UID: "68e02cbc-c03c-45cd-916a-16dd3b0052cd"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:08:39 crc kubenswrapper[4756]: I0224 00:08:39.552330 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e02cbc-c03c-45cd-916a-16dd3b0052cd-kube-api-access-7wlxw" (OuterVolumeSpecName: "kube-api-access-7wlxw") pod "68e02cbc-c03c-45cd-916a-16dd3b0052cd" (UID: "68e02cbc-c03c-45cd-916a-16dd3b0052cd"). InnerVolumeSpecName "kube-api-access-7wlxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:08:39 crc kubenswrapper[4756]: I0224 00:08:39.634479 4756 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/68e02cbc-c03c-45cd-916a-16dd3b0052cd-serviceca\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:39 crc kubenswrapper[4756]: I0224 00:08:39.634775 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wlxw\" (UniqueName: \"kubernetes.io/projected/68e02cbc-c03c-45cd-916a-16dd3b0052cd-kube-api-access-7wlxw\") on node \"crc\" DevicePath \"\"" Feb 24 00:08:39 crc kubenswrapper[4756]: W0224 00:08:39.914085 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-23d9195357c5c121501b8f55fe894aeca1fe7f264172c4227a5f19c6dbc7f33e WatchSource:0}: Error finding container 23d9195357c5c121501b8f55fe894aeca1fe7f264172c4227a5f19c6dbc7f33e: Status 404 returned error can't find the container with id 23d9195357c5c121501b8f55fe894aeca1fe7f264172c4227a5f19c6dbc7f33e Feb 24 00:08:40 crc kubenswrapper[4756]: W0224 00:08:40.051776 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-22c9570ed27a36ec5466ade4c8627aab3231b493fdedd1a20ece328c8ca93bfe WatchSource:0}: Error finding container 
22c9570ed27a36ec5466ade4c8627aab3231b493fdedd1a20ece328c8ca93bfe: Status 404 returned error can't find the container with id 22c9570ed27a36ec5466ade4c8627aab3231b493fdedd1a20ece328c8ca93bfe Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.102293 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hqsxk" Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.527981 4756 generic.go:334] "Generic (PLEG): container finished" podID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" containerID="6bd8e0b86721602e1f75747385dab730dec99960b93e5ca7b29e90381cb2a22b" exitCode=0 Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.528128 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z94cj" event={"ID":"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7","Type":"ContainerDied","Data":"6bd8e0b86721602e1f75747385dab730dec99960b93e5ca7b29e90381cb2a22b"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.533966 4756 generic.go:334] "Generic (PLEG): container finished" podID="789b9f46-f242-43cc-b598-a8571134461d" containerID="0e946e8bd96b1f1e9b96405b8b6e404883d306275a574077928c9deff5a478ad" exitCode=0 Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.534146 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7jz2" event={"ID":"789b9f46-f242-43cc-b598-a8571134461d","Type":"ContainerDied","Data":"0e946e8bd96b1f1e9b96405b8b6e404883d306275a574077928c9deff5a478ad"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.536919 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcv6x" event={"ID":"e83e15d6-954c-439b-b14e-78527bac2d45","Type":"ContainerStarted","Data":"044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.542522 4756 generic.go:334] "Generic (PLEG): container 
finished" podID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerID="bb10520d8cf8796b5bc8920208a4003f0154f9b99df0f58f79d9157d65c0a9c6" exitCode=0 Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.542597 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5w42" event={"ID":"4f1e9439-3efd-4fdc-97ec-a0864e307ec9","Type":"ContainerDied","Data":"bb10520d8cf8796b5bc8920208a4003f0154f9b99df0f58f79d9157d65c0a9c6"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.554257 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"818e36a25d3744c47c3e0550f776019d4225f803ac491c582237ea95d35fa87f"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.554320 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"23d9195357c5c121501b8f55fe894aeca1fe7f264172c4227a5f19c6dbc7f33e"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.556222 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.560878 4756 generic.go:334] "Generic (PLEG): container finished" podID="5584028b-2ad5-445c-b32a-0a342a022265" containerID="9270919b1bc6b549d19242f40d99375f5b5ecd47990d48c7b2947c371b558f58" exitCode=0 Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.561094 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xptg" event={"ID":"5584028b-2ad5-445c-b32a-0a342a022265","Type":"ContainerDied","Data":"9270919b1bc6b549d19242f40d99375f5b5ecd47990d48c7b2947c371b558f58"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.581900 4756 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"957bea32b565ef9ef40fdf0e7684750844124652a6dbe5b6660472f76f69b841"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.581965 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bb4ee1e51825c89db3e5db5bedec8234f27bb8942c16b2d7963d70a70de4a411"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.587008 4756 generic.go:334] "Generic (PLEG): container finished" podID="5245508c-ca61-4b00-8909-21b4abc6458a" containerID="b5d4480328fd7cd1bc7cbbe4057e9fc664e2884ca3c9e073a7af4b3238aed146" exitCode=0 Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.587140 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4z4" event={"ID":"5245508c-ca61-4b00-8909-21b4abc6458a","Type":"ContainerDied","Data":"b5d4480328fd7cd1bc7cbbe4057e9fc664e2884ca3c9e073a7af4b3238aed146"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.590438 4756 generic.go:334] "Generic (PLEG): container finished" podID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" containerID="6013f27e2d149dcc61355664c250709bbcef0f19a8c1ff8889c317e558de36a0" exitCode=0 Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.590464 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9hch" event={"ID":"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4","Type":"ContainerDied","Data":"6013f27e2d149dcc61355664c250709bbcef0f19a8c1ff8889c317e558de36a0"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.591764 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"521fd616bf2866c5c14dc87cecd2e4cc9035fad33b68518444554886e82f79ba"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.591787 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"22c9570ed27a36ec5466ade4c8627aab3231b493fdedd1a20ece328c8ca93bfe"} Feb 24 00:08:40 crc kubenswrapper[4756]: I0224 00:08:40.595922 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p54ks" event={"ID":"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782","Type":"ContainerStarted","Data":"e540c810bc48901ee29d5ef2bd960dd338208359092c271646760af9f6f7c38e"} Feb 24 00:08:41 crc kubenswrapper[4756]: I0224 00:08:41.604092 4756 generic.go:334] "Generic (PLEG): container finished" podID="e83e15d6-954c-439b-b14e-78527bac2d45" containerID="044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a" exitCode=0 Feb 24 00:08:41 crc kubenswrapper[4756]: I0224 00:08:41.604186 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcv6x" event={"ID":"e83e15d6-954c-439b-b14e-78527bac2d45","Type":"ContainerDied","Data":"044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a"} Feb 24 00:08:41 crc kubenswrapper[4756]: I0224 00:08:41.610542 4756 generic.go:334] "Generic (PLEG): container finished" podID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerID="e540c810bc48901ee29d5ef2bd960dd338208359092c271646760af9f6f7c38e" exitCode=0 Feb 24 00:08:41 crc kubenswrapper[4756]: I0224 00:08:41.610943 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p54ks" event={"ID":"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782","Type":"ContainerDied","Data":"e540c810bc48901ee29d5ef2bd960dd338208359092c271646760af9f6f7c38e"} Feb 24 00:08:43 crc 
kubenswrapper[4756]: I0224 00:08:43.625150 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xptg" event={"ID":"5584028b-2ad5-445c-b32a-0a342a022265","Type":"ContainerStarted","Data":"88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046"} Feb 24 00:08:43 crc kubenswrapper[4756]: I0224 00:08:43.649045 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2xptg" podStartSLOduration=2.720117653 podStartE2EDuration="35.64901905s" podCreationTimestamp="2026-02-24 00:08:08 +0000 UTC" firstStartedPulling="2026-02-24 00:08:09.645440307 +0000 UTC m=+146.556302940" lastFinishedPulling="2026-02-24 00:08:42.574341694 +0000 UTC m=+179.485204337" observedRunningTime="2026-02-24 00:08:43.645389625 +0000 UTC m=+180.556252258" watchObservedRunningTime="2026-02-24 00:08:43.64901905 +0000 UTC m=+180.559881683" Feb 24 00:08:44 crc kubenswrapper[4756]: I0224 00:08:44.636600 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p54ks" event={"ID":"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782","Type":"ContainerStarted","Data":"e48c7be0515108d945e282d59a56892d4e48c74ed09a5e438bac96506f26c6c6"} Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.059506 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 00:08:45 crc kubenswrapper[4756]: E0224 00:08:45.059792 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b" containerName="pruner" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.059809 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b" containerName="pruner" Feb 24 00:08:45 crc kubenswrapper[4756]: E0224 00:08:45.059821 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855bc0f2-7551-4683-a84b-7f88e3d93d3c" containerName="pruner" 
Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.059827 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="855bc0f2-7551-4683-a84b-7f88e3d93d3c" containerName="pruner" Feb 24 00:08:45 crc kubenswrapper[4756]: E0224 00:08:45.059844 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e02cbc-c03c-45cd-916a-16dd3b0052cd" containerName="image-pruner" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.059851 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e02cbc-c03c-45cd-916a-16dd3b0052cd" containerName="image-pruner" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.059950 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b168c9ec-7d78-45ce-95ef-d2ca48bfdd4b" containerName="pruner" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.059964 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="855bc0f2-7551-4683-a84b-7f88e3d93d3c" containerName="pruner" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.059975 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e02cbc-c03c-45cd-916a-16dd3b0052cd" containerName="image-pruner" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.060526 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.063611 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.063840 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.069144 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.120952 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed45e726-5123-48be-baf6-ba78d890b533-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed45e726-5123-48be-baf6-ba78d890b533\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.121075 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed45e726-5123-48be-baf6-ba78d890b533-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed45e726-5123-48be-baf6-ba78d890b533\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.222218 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed45e726-5123-48be-baf6-ba78d890b533-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed45e726-5123-48be-baf6-ba78d890b533\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.222605 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/ed45e726-5123-48be-baf6-ba78d890b533-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed45e726-5123-48be-baf6-ba78d890b533\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.222701 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed45e726-5123-48be-baf6-ba78d890b533-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed45e726-5123-48be-baf6-ba78d890b533\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.243640 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed45e726-5123-48be-baf6-ba78d890b533-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed45e726-5123-48be-baf6-ba78d890b533\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.388678 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:08:45 crc kubenswrapper[4756]: I0224 00:08:45.660997 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-p54ks" podStartSLOduration=6.492755662 podStartE2EDuration="36.660977501s" podCreationTimestamp="2026-02-24 00:08:09 +0000 UTC" firstStartedPulling="2026-02-24 00:08:13.247853143 +0000 UTC m=+150.158715776" lastFinishedPulling="2026-02-24 00:08:43.416074982 +0000 UTC m=+180.326937615" observedRunningTime="2026-02-24 00:08:45.659339224 +0000 UTC m=+182.570201887" watchObservedRunningTime="2026-02-24 00:08:45.660977501 +0000 UTC m=+182.571840134" Feb 24 00:08:46 crc kubenswrapper[4756]: I0224 00:08:46.849487 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.711174 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z94cj" event={"ID":"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7","Type":"ContainerStarted","Data":"88a6cc38612d943de3a99d73f8432f95a2e1d2261b877670bd4c071bced9dbeb"} Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.717554 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4z4" event={"ID":"5245508c-ca61-4b00-8909-21b4abc6458a","Type":"ContainerStarted","Data":"4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853"} Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.719504 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcv6x" event={"ID":"e83e15d6-954c-439b-b14e-78527bac2d45","Type":"ContainerStarted","Data":"af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13"} Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.725263 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-t9hch" event={"ID":"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4","Type":"ContainerStarted","Data":"9fa074e18c5dc651dc3ce9dc7f75c47eb946f7ca3e5c1c837ea430733a43e735"} Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.730782 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5w42" event={"ID":"4f1e9439-3efd-4fdc-97ec-a0864e307ec9","Type":"ContainerStarted","Data":"b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c"} Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.736167 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z94cj" podStartSLOduration=3.906921571 podStartE2EDuration="41.736149414s" podCreationTimestamp="2026-02-24 00:08:06 +0000 UTC" firstStartedPulling="2026-02-24 00:08:08.541359536 +0000 UTC m=+145.452222169" lastFinishedPulling="2026-02-24 00:08:46.370587379 +0000 UTC m=+183.281450012" observedRunningTime="2026-02-24 00:08:47.732657363 +0000 UTC m=+184.643519996" watchObservedRunningTime="2026-02-24 00:08:47.736149414 +0000 UTC m=+184.647012067" Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.735542 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ed45e726-5123-48be-baf6-ba78d890b533","Type":"ContainerStarted","Data":"4918decdecb62f242494f1bcc07936155846c46565a894cd6914c9060b9db804"} Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.736545 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ed45e726-5123-48be-baf6-ba78d890b533","Type":"ContainerStarted","Data":"a0eb306b3e8b074465cc83c987ea988c2ebb06773d9c46a111cdf9d530caf0bb"} Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.737562 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7jz2" 
event={"ID":"789b9f46-f242-43cc-b598-a8571134461d","Type":"ContainerStarted","Data":"28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79"} Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.765002 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gl4z4" podStartSLOduration=3.7437818309999997 podStartE2EDuration="41.764980168s" podCreationTimestamp="2026-02-24 00:08:06 +0000 UTC" firstStartedPulling="2026-02-24 00:08:08.569365813 +0000 UTC m=+145.480228456" lastFinishedPulling="2026-02-24 00:08:46.59056416 +0000 UTC m=+183.501426793" observedRunningTime="2026-02-24 00:08:47.763346732 +0000 UTC m=+184.674209385" watchObservedRunningTime="2026-02-24 00:08:47.764980168 +0000 UTC m=+184.675842801" Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.785075 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t9hch" podStartSLOduration=3.8853623859999997 podStartE2EDuration="41.78502997s" podCreationTimestamp="2026-02-24 00:08:06 +0000 UTC" firstStartedPulling="2026-02-24 00:08:08.508492332 +0000 UTC m=+145.419354965" lastFinishedPulling="2026-02-24 00:08:46.408159916 +0000 UTC m=+183.319022549" observedRunningTime="2026-02-24 00:08:47.784290535 +0000 UTC m=+184.695153178" watchObservedRunningTime="2026-02-24 00:08:47.78502997 +0000 UTC m=+184.695892603" Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.862584 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5w42" podStartSLOduration=4.302010084 podStartE2EDuration="39.862547346s" podCreationTimestamp="2026-02-24 00:08:08 +0000 UTC" firstStartedPulling="2026-02-24 00:08:10.981257055 +0000 UTC m=+147.892119688" lastFinishedPulling="2026-02-24 00:08:46.541794317 +0000 UTC m=+183.452656950" observedRunningTime="2026-02-24 00:08:47.823554 +0000 UTC m=+184.734416633" 
watchObservedRunningTime="2026-02-24 00:08:47.862547346 +0000 UTC m=+184.773409989" Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.863782 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zcv6x" podStartSLOduration=5.002637221 podStartE2EDuration="38.863775938s" podCreationTimestamp="2026-02-24 00:08:09 +0000 UTC" firstStartedPulling="2026-02-24 00:08:12.15121657 +0000 UTC m=+149.062079193" lastFinishedPulling="2026-02-24 00:08:46.012355277 +0000 UTC m=+182.923217910" observedRunningTime="2026-02-24 00:08:47.855986689 +0000 UTC m=+184.766849362" watchObservedRunningTime="2026-02-24 00:08:47.863775938 +0000 UTC m=+184.774638571" Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.879591 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.879566563 podStartE2EDuration="2.879566563s" podCreationTimestamp="2026-02-24 00:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:47.874369963 +0000 UTC m=+184.785232596" watchObservedRunningTime="2026-02-24 00:08:47.879566563 +0000 UTC m=+184.790429196" Feb 24 00:08:47 crc kubenswrapper[4756]: I0224 00:08:47.903930 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t7jz2" podStartSLOduration=4.026094994 podStartE2EDuration="41.903904053s" podCreationTimestamp="2026-02-24 00:08:06 +0000 UTC" firstStartedPulling="2026-02-24 00:08:08.521377647 +0000 UTC m=+145.432240270" lastFinishedPulling="2026-02-24 00:08:46.399186696 +0000 UTC m=+183.310049329" observedRunningTime="2026-02-24 00:08:47.901937475 +0000 UTC m=+184.812800118" watchObservedRunningTime="2026-02-24 00:08:47.903904053 +0000 UTC m=+184.814766696" Feb 24 00:08:48 crc kubenswrapper[4756]: I0224 00:08:48.171025 4756 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5qkbl"] Feb 24 00:08:48 crc kubenswrapper[4756]: I0224 00:08:48.745721 4756 generic.go:334] "Generic (PLEG): container finished" podID="ed45e726-5123-48be-baf6-ba78d890b533" containerID="4918decdecb62f242494f1bcc07936155846c46565a894cd6914c9060b9db804" exitCode=0 Feb 24 00:08:48 crc kubenswrapper[4756]: I0224 00:08:48.746626 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ed45e726-5123-48be-baf6-ba78d890b533","Type":"ContainerDied","Data":"4918decdecb62f242494f1bcc07936155846c46565a894cd6914c9060b9db804"} Feb 24 00:08:48 crc kubenswrapper[4756]: I0224 00:08:48.749198 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:48 crc kubenswrapper[4756]: I0224 00:08:48.749253 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:48 crc kubenswrapper[4756]: I0224 00:08:48.909987 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:49 crc kubenswrapper[4756]: I0224 00:08:49.161971 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:49 crc kubenswrapper[4756]: I0224 00:08:49.162083 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5w42" Feb 24 00:08:49 crc kubenswrapper[4756]: I0224 00:08:49.789845 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:49 crc kubenswrapper[4756]: I0224 00:08:49.789925 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:08:49 crc kubenswrapper[4756]: I0224 00:08:49.829906 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.031611 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.101681 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed45e726-5123-48be-baf6-ba78d890b533-kube-api-access\") pod \"ed45e726-5123-48be-baf6-ba78d890b533\" (UID: \"ed45e726-5123-48be-baf6-ba78d890b533\") " Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.102215 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed45e726-5123-48be-baf6-ba78d890b533-kubelet-dir\") pod \"ed45e726-5123-48be-baf6-ba78d890b533\" (UID: \"ed45e726-5123-48be-baf6-ba78d890b533\") " Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.102291 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed45e726-5123-48be-baf6-ba78d890b533-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ed45e726-5123-48be-baf6-ba78d890b533" (UID: "ed45e726-5123-48be-baf6-ba78d890b533"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.102624 4756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed45e726-5123-48be-baf6-ba78d890b533-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.109103 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed45e726-5123-48be-baf6-ba78d890b533-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ed45e726-5123-48be-baf6-ba78d890b533" (UID: "ed45e726-5123-48be-baf6-ba78d890b533"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.199983 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-n5w42" podUID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerName="registry-server" probeResult="failure" output=<
Feb 24 00:08:50 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s
Feb 24 00:08:50 crc kubenswrapper[4756]: >
Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.203985 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed45e726-5123-48be-baf6-ba78d890b533-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.342461 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p54ks"
Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.342538 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p54ks"
Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.760840 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.760822 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ed45e726-5123-48be-baf6-ba78d890b533","Type":"ContainerDied","Data":"a0eb306b3e8b074465cc83c987ea988c2ebb06773d9c46a111cdf9d530caf0bb"}
Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.760912 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0eb306b3e8b074465cc83c987ea988c2ebb06773d9c46a111cdf9d530caf0bb"
Feb 24 00:08:50 crc kubenswrapper[4756]: I0224 00:08:50.837797 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zcv6x" podUID="e83e15d6-954c-439b-b14e-78527bac2d45" containerName="registry-server" probeResult="failure" output=<
Feb 24 00:08:50 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s
Feb 24 00:08:50 crc kubenswrapper[4756]: >
Feb 24 00:08:51 crc kubenswrapper[4756]: I0224 00:08:51.381031 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p54ks" podUID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerName="registry-server" probeResult="failure" output=<
Feb 24 00:08:51 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s
Feb 24 00:08:51 crc kubenswrapper[4756]: >
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.259234 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 24 00:08:52 crc kubenswrapper[4756]: E0224 00:08:52.259584 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed45e726-5123-48be-baf6-ba78d890b533" containerName="pruner"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.259610 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed45e726-5123-48be-baf6-ba78d890b533" containerName="pruner"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.259754 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed45e726-5123-48be-baf6-ba78d890b533" containerName="pruner"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.263271 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.265525 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.266729 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.274368 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.328678 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-var-lock\") pod \"installer-9-crc\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.328739 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f14689d-861e-4ded-a7b3-d250eec9093e-kube-api-access\") pod \"installer-9-crc\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.328776 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.429374 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-var-lock\") pod \"installer-9-crc\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.429430 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f14689d-861e-4ded-a7b3-d250eec9093e-kube-api-access\") pod \"installer-9-crc\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.429455 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.429558 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.429598 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-var-lock\") pod \"installer-9-crc\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.460755 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f14689d-861e-4ded-a7b3-d250eec9093e-kube-api-access\") pod \"installer-9-crc\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 00:08:52 crc kubenswrapper[4756]: I0224 00:08:52.629313 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 24 00:08:53 crc kubenswrapper[4756]: I0224 00:08:53.113640 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 24 00:08:53 crc kubenswrapper[4756]: W0224 00:08:53.126285 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4f14689d_861e_4ded_a7b3_d250eec9093e.slice/crio-ef675dd6450c5236cc2931c4ac6d35681c89f09a0a4fdf0f891307bb5512d2e8 WatchSource:0}: Error finding container ef675dd6450c5236cc2931c4ac6d35681c89f09a0a4fdf0f891307bb5512d2e8: Status 404 returned error can't find the container with id ef675dd6450c5236cc2931c4ac6d35681c89f09a0a4fdf0f891307bb5512d2e8
Feb 24 00:08:53 crc kubenswrapper[4756]: I0224 00:08:53.786868 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4f14689d-861e-4ded-a7b3-d250eec9093e","Type":"ContainerStarted","Data":"945ac1263b26caf9a3d4bf3fb384f8e100030b6a298f4e3af18644ba919e742a"}
Feb 24 00:08:53 crc kubenswrapper[4756]: I0224 00:08:53.787318 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4f14689d-861e-4ded-a7b3-d250eec9093e","Type":"ContainerStarted","Data":"ef675dd6450c5236cc2931c4ac6d35681c89f09a0a4fdf0f891307bb5512d2e8"}
Feb 24 00:08:53 crc kubenswrapper[4756]: I0224 00:08:53.813102 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.813050782 podStartE2EDuration="1.813050782s" podCreationTimestamp="2026-02-24 00:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:08:53.805634026 +0000 UTC m=+190.716496699" watchObservedRunningTime="2026-02-24 00:08:53.813050782 +0000 UTC m=+190.723913415"
Feb 24 00:08:56 crc kubenswrapper[4756]: I0224 00:08:56.606856 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z94cj"
Feb 24 00:08:56 crc kubenswrapper[4756]: I0224 00:08:56.607327 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z94cj"
Feb 24 00:08:56 crc kubenswrapper[4756]: I0224 00:08:56.666057 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z94cj"
Feb 24 00:08:56 crc kubenswrapper[4756]: I0224 00:08:56.836023 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t9hch"
Feb 24 00:08:56 crc kubenswrapper[4756]: I0224 00:08:56.836109 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t9hch"
Feb 24 00:08:56 crc kubenswrapper[4756]: I0224 00:08:56.850267 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z94cj"
Feb 24 00:08:56 crc kubenswrapper[4756]: I0224 00:08:56.887718 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t9hch"
Feb 24 00:08:57 crc kubenswrapper[4756]: I0224 00:08:57.003736 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:08:57 crc kubenswrapper[4756]: I0224 00:08:57.003824 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:08:57 crc kubenswrapper[4756]: I0224 00:08:57.048868 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:08:57 crc kubenswrapper[4756]: I0224 00:08:57.255860 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:57 crc kubenswrapper[4756]: I0224 00:08:57.255929 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:57 crc kubenswrapper[4756]: I0224 00:08:57.303524 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:57 crc kubenswrapper[4756]: I0224 00:08:57.864094 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:08:57 crc kubenswrapper[4756]: I0224 00:08:57.872773 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t9hch"
Feb 24 00:08:57 crc kubenswrapper[4756]: I0224 00:08:57.882233 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:08:58 crc kubenswrapper[4756]: I0224 00:08:58.702912 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gl4z4"]
Feb 24 00:08:59 crc kubenswrapper[4756]: I0224 00:08:59.221420 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5w42"
Feb 24 00:08:59 crc kubenswrapper[4756]: I0224 00:08:59.301635 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5w42"
Feb 24 00:08:59 crc kubenswrapper[4756]: I0224 00:08:59.307370 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t7jz2"]
Feb 24 00:08:59 crc kubenswrapper[4756]: I0224 00:08:59.832948 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t7jz2" podUID="789b9f46-f242-43cc-b598-a8571134461d" containerName="registry-server" containerID="cri-o://28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79" gracePeriod=2
Feb 24 00:08:59 crc kubenswrapper[4756]: I0224 00:08:59.833444 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gl4z4" podUID="5245508c-ca61-4b00-8909-21b4abc6458a" containerName="registry-server" containerID="cri-o://4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853" gracePeriod=2
Feb 24 00:08:59 crc kubenswrapper[4756]: I0224 00:08:59.857940 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zcv6x"
Feb 24 00:08:59 crc kubenswrapper[4756]: I0224 00:08:59.910634 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zcv6x"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.213855 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.263151 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.302033 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-catalog-content\") pod \"789b9f46-f242-43cc-b598-a8571134461d\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") "
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.302253 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2cvj\" (UniqueName: \"kubernetes.io/projected/789b9f46-f242-43cc-b598-a8571134461d-kube-api-access-t2cvj\") pod \"789b9f46-f242-43cc-b598-a8571134461d\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") "
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.302324 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-utilities\") pod \"789b9f46-f242-43cc-b598-a8571134461d\" (UID: \"789b9f46-f242-43cc-b598-a8571134461d\") "
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.303936 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-utilities" (OuterVolumeSpecName: "utilities") pod "789b9f46-f242-43cc-b598-a8571134461d" (UID: "789b9f46-f242-43cc-b598-a8571134461d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.311013 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789b9f46-f242-43cc-b598-a8571134461d-kube-api-access-t2cvj" (OuterVolumeSpecName: "kube-api-access-t2cvj") pod "789b9f46-f242-43cc-b598-a8571134461d" (UID: "789b9f46-f242-43cc-b598-a8571134461d"). InnerVolumeSpecName "kube-api-access-t2cvj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.364282 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "789b9f46-f242-43cc-b598-a8571134461d" (UID: "789b9f46-f242-43cc-b598-a8571134461d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.404492 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-catalog-content\") pod \"5245508c-ca61-4b00-8909-21b4abc6458a\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") "
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.404574 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-utilities\") pod \"5245508c-ca61-4b00-8909-21b4abc6458a\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") "
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.404630 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k6pb\" (UniqueName: \"kubernetes.io/projected/5245508c-ca61-4b00-8909-21b4abc6458a-kube-api-access-7k6pb\") pod \"5245508c-ca61-4b00-8909-21b4abc6458a\" (UID: \"5245508c-ca61-4b00-8909-21b4abc6458a\") "
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.405027 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.405058 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2cvj\" (UniqueName: \"kubernetes.io/projected/789b9f46-f242-43cc-b598-a8571134461d-kube-api-access-t2cvj\") on node \"crc\" DevicePath \"\""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.405104 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/789b9f46-f242-43cc-b598-a8571134461d-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.406842 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-utilities" (OuterVolumeSpecName: "utilities") pod "5245508c-ca61-4b00-8909-21b4abc6458a" (UID: "5245508c-ca61-4b00-8909-21b4abc6458a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.411142 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5245508c-ca61-4b00-8909-21b4abc6458a-kube-api-access-7k6pb" (OuterVolumeSpecName: "kube-api-access-7k6pb") pod "5245508c-ca61-4b00-8909-21b4abc6458a" (UID: "5245508c-ca61-4b00-8909-21b4abc6458a"). InnerVolumeSpecName "kube-api-access-7k6pb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.414415 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p54ks"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.459423 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p54ks"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.484746 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5245508c-ca61-4b00-8909-21b4abc6458a" (UID: "5245508c-ca61-4b00-8909-21b4abc6458a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.507165 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.507215 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5245508c-ca61-4b00-8909-21b4abc6458a-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.507230 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k6pb\" (UniqueName: \"kubernetes.io/projected/5245508c-ca61-4b00-8909-21b4abc6458a-kube-api-access-7k6pb\") on node \"crc\" DevicePath \"\""
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.851652 4756 generic.go:334] "Generic (PLEG): container finished" podID="789b9f46-f242-43cc-b598-a8571134461d" containerID="28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79" exitCode=0
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.851731 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7jz2" event={"ID":"789b9f46-f242-43cc-b598-a8571134461d","Type":"ContainerDied","Data":"28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79"}
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.851865 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t7jz2"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.852310 4756 scope.go:117] "RemoveContainer" containerID="28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.852279 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t7jz2" event={"ID":"789b9f46-f242-43cc-b598-a8571134461d","Type":"ContainerDied","Data":"2cadb450952d6c309e5822d7d6a02955d789fa7e47ed7bbbad41e03c6c35f6f3"}
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.857044 4756 generic.go:334] "Generic (PLEG): container finished" podID="5245508c-ca61-4b00-8909-21b4abc6458a" containerID="4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853" exitCode=0
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.858218 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gl4z4"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.859314 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4z4" event={"ID":"5245508c-ca61-4b00-8909-21b4abc6458a","Type":"ContainerDied","Data":"4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853"}
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.859378 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gl4z4" event={"ID":"5245508c-ca61-4b00-8909-21b4abc6458a","Type":"ContainerDied","Data":"28f6a3adf3ed778ade14058abfbd40b8f56eb7a2b71e9b71f499115fcc481985"}
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.886012 4756 scope.go:117] "RemoveContainer" containerID="0e946e8bd96b1f1e9b96405b8b6e404883d306275a574077928c9deff5a478ad"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.925329 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t7jz2"]
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.933302 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t7jz2"]
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.939081 4756 scope.go:117] "RemoveContainer" containerID="caa92e03500c8e149759c94d9219592c01d887b8af65232a543f203483316164"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.939718 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gl4z4"]
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.946513 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gl4z4"]
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.962826 4756 scope.go:117] "RemoveContainer" containerID="28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79"
Feb 24 00:09:00 crc kubenswrapper[4756]: E0224 00:09:00.964011 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79\": container with ID starting with 28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79 not found: ID does not exist" containerID="28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.964105 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79"} err="failed to get container status \"28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79\": rpc error: code = NotFound desc = could not find container \"28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79\": container with ID starting with 28ce1c8ed56b498bacb59668b250aa5ad6e96229a0a5f6ab7fb2c3cf28f09a79 not found: ID does not exist"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.964154 4756 scope.go:117] "RemoveContainer" containerID="0e946e8bd96b1f1e9b96405b8b6e404883d306275a574077928c9deff5a478ad"
Feb 24 00:09:00 crc kubenswrapper[4756]: E0224 00:09:00.965504 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e946e8bd96b1f1e9b96405b8b6e404883d306275a574077928c9deff5a478ad\": container with ID starting with 0e946e8bd96b1f1e9b96405b8b6e404883d306275a574077928c9deff5a478ad not found: ID does not exist" containerID="0e946e8bd96b1f1e9b96405b8b6e404883d306275a574077928c9deff5a478ad"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.965527 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e946e8bd96b1f1e9b96405b8b6e404883d306275a574077928c9deff5a478ad"} err="failed to get container status \"0e946e8bd96b1f1e9b96405b8b6e404883d306275a574077928c9deff5a478ad\": rpc error: code = NotFound desc = could not find container \"0e946e8bd96b1f1e9b96405b8b6e404883d306275a574077928c9deff5a478ad\": container with ID starting with 0e946e8bd96b1f1e9b96405b8b6e404883d306275a574077928c9deff5a478ad not found: ID does not exist"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.965540 4756 scope.go:117] "RemoveContainer" containerID="caa92e03500c8e149759c94d9219592c01d887b8af65232a543f203483316164"
Feb 24 00:09:00 crc kubenswrapper[4756]: E0224 00:09:00.966702 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caa92e03500c8e149759c94d9219592c01d887b8af65232a543f203483316164\": container with ID starting with caa92e03500c8e149759c94d9219592c01d887b8af65232a543f203483316164 not found: ID does not exist" containerID="caa92e03500c8e149759c94d9219592c01d887b8af65232a543f203483316164"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.966777 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa92e03500c8e149759c94d9219592c01d887b8af65232a543f203483316164"} err="failed to get container status \"caa92e03500c8e149759c94d9219592c01d887b8af65232a543f203483316164\": rpc error: code = NotFound desc = could not find container \"caa92e03500c8e149759c94d9219592c01d887b8af65232a543f203483316164\": container with ID starting with caa92e03500c8e149759c94d9219592c01d887b8af65232a543f203483316164 not found: ID does not exist"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.966827 4756 scope.go:117] "RemoveContainer" containerID="4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853"
Feb 24 00:09:00 crc kubenswrapper[4756]: I0224 00:09:00.991663 4756 scope.go:117] "RemoveContainer" containerID="b5d4480328fd7cd1bc7cbbe4057e9fc664e2884ca3c9e073a7af4b3238aed146"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.014330 4756 scope.go:117] "RemoveContainer" containerID="31b8177e2c896b91cfb7f2352e06c1dfbe7c4418bd84bd11a103b51af50daece"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.042379 4756 scope.go:117] "RemoveContainer" containerID="4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853"
Feb 24 00:09:01 crc kubenswrapper[4756]: E0224 00:09:01.043199 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853\": container with ID starting with 4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853 not found: ID does not exist" containerID="4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.043264 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853"} err="failed to get container status \"4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853\": rpc error: code = NotFound desc = could not find container \"4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853\": container with ID starting with 4b68fe1c4d7e4725a4507b2cb45e8961066fc0d62b87d8119b8a94c3c803f853 not found: ID does not exist"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.043311 4756 scope.go:117] "RemoveContainer" containerID="b5d4480328fd7cd1bc7cbbe4057e9fc664e2884ca3c9e073a7af4b3238aed146"
Feb 24 00:09:01 crc kubenswrapper[4756]: E0224 00:09:01.043984 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d4480328fd7cd1bc7cbbe4057e9fc664e2884ca3c9e073a7af4b3238aed146\": container with ID starting with b5d4480328fd7cd1bc7cbbe4057e9fc664e2884ca3c9e073a7af4b3238aed146 not found: ID does not exist" containerID="b5d4480328fd7cd1bc7cbbe4057e9fc664e2884ca3c9e073a7af4b3238aed146"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.044045 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d4480328fd7cd1bc7cbbe4057e9fc664e2884ca3c9e073a7af4b3238aed146"} err="failed to get container status \"b5d4480328fd7cd1bc7cbbe4057e9fc664e2884ca3c9e073a7af4b3238aed146\": rpc error: code = NotFound desc = could not find container \"b5d4480328fd7cd1bc7cbbe4057e9fc664e2884ca3c9e073a7af4b3238aed146\": container with ID starting with b5d4480328fd7cd1bc7cbbe4057e9fc664e2884ca3c9e073a7af4b3238aed146 not found: ID does not exist"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.044101 4756 scope.go:117] "RemoveContainer" containerID="31b8177e2c896b91cfb7f2352e06c1dfbe7c4418bd84bd11a103b51af50daece"
Feb 24 00:09:01 crc kubenswrapper[4756]: E0224 00:09:01.044710 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31b8177e2c896b91cfb7f2352e06c1dfbe7c4418bd84bd11a103b51af50daece\": container with ID starting with 31b8177e2c896b91cfb7f2352e06c1dfbe7c4418bd84bd11a103b51af50daece not found: ID does not exist" containerID="31b8177e2c896b91cfb7f2352e06c1dfbe7c4418bd84bd11a103b51af50daece"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.044758 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31b8177e2c896b91cfb7f2352e06c1dfbe7c4418bd84bd11a103b51af50daece"} err="failed to get container status \"31b8177e2c896b91cfb7f2352e06c1dfbe7c4418bd84bd11a103b51af50daece\": rpc error: code = NotFound desc = could not find container \"31b8177e2c896b91cfb7f2352e06c1dfbe7c4418bd84bd11a103b51af50daece\": container with ID starting with 31b8177e2c896b91cfb7f2352e06c1dfbe7c4418bd84bd11a103b51af50daece not found: ID does not exist"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.106845 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5w42"]
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.107289 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n5w42" podUID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerName="registry-server" containerID="cri-o://b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c" gracePeriod=2
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.498248 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5w42"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.626193 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-catalog-content\") pod \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") "
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.626258 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5jd8\" (UniqueName: \"kubernetes.io/projected/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-kube-api-access-c5jd8\") pod \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") "
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.626349 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-utilities\") pod \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\" (UID: \"4f1e9439-3efd-4fdc-97ec-a0864e307ec9\") "
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.627252 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-utilities" (OuterVolumeSpecName: "utilities") pod "4f1e9439-3efd-4fdc-97ec-a0864e307ec9" (UID: "4f1e9439-3efd-4fdc-97ec-a0864e307ec9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.631542 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-kube-api-access-c5jd8" (OuterVolumeSpecName: "kube-api-access-c5jd8") pod "4f1e9439-3efd-4fdc-97ec-a0864e307ec9" (UID: "4f1e9439-3efd-4fdc-97ec-a0864e307ec9"). InnerVolumeSpecName "kube-api-access-c5jd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.657200 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f1e9439-3efd-4fdc-97ec-a0864e307ec9" (UID: "4f1e9439-3efd-4fdc-97ec-a0864e307ec9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.727795 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.727842 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5jd8\" (UniqueName: \"kubernetes.io/projected/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-kube-api-access-c5jd8\") on node \"crc\" DevicePath \"\""
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.727852 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f1e9439-3efd-4fdc-97ec-a0864e307ec9-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.846483 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5245508c-ca61-4b00-8909-21b4abc6458a" path="/var/lib/kubelet/pods/5245508c-ca61-4b00-8909-21b4abc6458a/volumes"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.847369 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789b9f46-f242-43cc-b598-a8571134461d" path="/var/lib/kubelet/pods/789b9f46-f242-43cc-b598-a8571134461d/volumes"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.868494 4756 generic.go:334] "Generic (PLEG): container finished" podID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerID="b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c" exitCode=0
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.868582 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5w42" event={"ID":"4f1e9439-3efd-4fdc-97ec-a0864e307ec9","Type":"ContainerDied","Data":"b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c"}
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.868629 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5w42"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.868674 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5w42" event={"ID":"4f1e9439-3efd-4fdc-97ec-a0864e307ec9","Type":"ContainerDied","Data":"b8f6d42ebc68bf3b4dba93ab758a3d2307ae6e69ff0bdbdd62cc750a8bf6c1b6"}
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.868722 4756 scope.go:117] "RemoveContainer" containerID="b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.884559 4756 scope.go:117] "RemoveContainer" containerID="bb10520d8cf8796b5bc8920208a4003f0154f9b99df0f58f79d9157d65c0a9c6"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.898474 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5w42"]
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.905645 4756 scope.go:117] "RemoveContainer" containerID="40bdc5ce27ecd77c0b95b53c658c126ff2fd2ab01da179ac48c48064eb5e9431"
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.906217 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5w42"]
Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.922640 4756 scope.go:117] "RemoveContainer" containerID="b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c"
Feb 24 
00:09:01 crc kubenswrapper[4756]: E0224 00:09:01.923017 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c\": container with ID starting with b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c not found: ID does not exist" containerID="b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c" Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.923063 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c"} err="failed to get container status \"b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c\": rpc error: code = NotFound desc = could not find container \"b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c\": container with ID starting with b69735272ec3aacd615032a123f5c2ae79fcfbeeec33086e6755b1f9ad05007c not found: ID does not exist" Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.923116 4756 scope.go:117] "RemoveContainer" containerID="bb10520d8cf8796b5bc8920208a4003f0154f9b99df0f58f79d9157d65c0a9c6" Feb 24 00:09:01 crc kubenswrapper[4756]: E0224 00:09:01.923447 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb10520d8cf8796b5bc8920208a4003f0154f9b99df0f58f79d9157d65c0a9c6\": container with ID starting with bb10520d8cf8796b5bc8920208a4003f0154f9b99df0f58f79d9157d65c0a9c6 not found: ID does not exist" containerID="bb10520d8cf8796b5bc8920208a4003f0154f9b99df0f58f79d9157d65c0a9c6" Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.923478 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb10520d8cf8796b5bc8920208a4003f0154f9b99df0f58f79d9157d65c0a9c6"} err="failed to get container status 
\"bb10520d8cf8796b5bc8920208a4003f0154f9b99df0f58f79d9157d65c0a9c6\": rpc error: code = NotFound desc = could not find container \"bb10520d8cf8796b5bc8920208a4003f0154f9b99df0f58f79d9157d65c0a9c6\": container with ID starting with bb10520d8cf8796b5bc8920208a4003f0154f9b99df0f58f79d9157d65c0a9c6 not found: ID does not exist" Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.923496 4756 scope.go:117] "RemoveContainer" containerID="40bdc5ce27ecd77c0b95b53c658c126ff2fd2ab01da179ac48c48064eb5e9431" Feb 24 00:09:01 crc kubenswrapper[4756]: E0224 00:09:01.923792 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40bdc5ce27ecd77c0b95b53c658c126ff2fd2ab01da179ac48c48064eb5e9431\": container with ID starting with 40bdc5ce27ecd77c0b95b53c658c126ff2fd2ab01da179ac48c48064eb5e9431 not found: ID does not exist" containerID="40bdc5ce27ecd77c0b95b53c658c126ff2fd2ab01da179ac48c48064eb5e9431" Feb 24 00:09:01 crc kubenswrapper[4756]: I0224 00:09:01.923818 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40bdc5ce27ecd77c0b95b53c658c126ff2fd2ab01da179ac48c48064eb5e9431"} err="failed to get container status \"40bdc5ce27ecd77c0b95b53c658c126ff2fd2ab01da179ac48c48064eb5e9431\": rpc error: code = NotFound desc = could not find container \"40bdc5ce27ecd77c0b95b53c658c126ff2fd2ab01da179ac48c48064eb5e9431\": container with ID starting with 40bdc5ce27ecd77c0b95b53c658c126ff2fd2ab01da179ac48c48064eb5e9431 not found: ID does not exist" Feb 24 00:09:03 crc kubenswrapper[4756]: I0224 00:09:03.700371 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p54ks"] Feb 24 00:09:03 crc kubenswrapper[4756]: I0224 00:09:03.701021 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-p54ks" podUID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerName="registry-server" 
containerID="cri-o://e48c7be0515108d945e282d59a56892d4e48c74ed09a5e438bac96506f26c6c6" gracePeriod=2 Feb 24 00:09:03 crc kubenswrapper[4756]: I0224 00:09:03.855344 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" path="/var/lib/kubelet/pods/4f1e9439-3efd-4fdc-97ec-a0864e307ec9/volumes" Feb 24 00:09:03 crc kubenswrapper[4756]: I0224 00:09:03.887991 4756 generic.go:334] "Generic (PLEG): container finished" podID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerID="e48c7be0515108d945e282d59a56892d4e48c74ed09a5e438bac96506f26c6c6" exitCode=0 Feb 24 00:09:03 crc kubenswrapper[4756]: I0224 00:09:03.888488 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p54ks" event={"ID":"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782","Type":"ContainerDied","Data":"e48c7be0515108d945e282d59a56892d4e48c74ed09a5e438bac96506f26c6c6"} Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.038667 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.164461 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj98q\" (UniqueName: \"kubernetes.io/projected/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-kube-api-access-mj98q\") pod \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.164569 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-catalog-content\") pod \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.164621 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-utilities\") pod \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\" (UID: \"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782\") " Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.165949 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-utilities" (OuterVolumeSpecName: "utilities") pod "2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" (UID: "2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.166137 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.170731 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-kube-api-access-mj98q" (OuterVolumeSpecName: "kube-api-access-mj98q") pod "2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" (UID: "2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782"). InnerVolumeSpecName "kube-api-access-mj98q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.267951 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj98q\" (UniqueName: \"kubernetes.io/projected/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-kube-api-access-mj98q\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.277810 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" (UID: "2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.369755 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.900909 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p54ks" event={"ID":"2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782","Type":"ContainerDied","Data":"442e57e9d1fc36613052788edddeb211bf4cba33a3769d771a39464481f5a4ca"} Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.901009 4756 scope.go:117] "RemoveContainer" containerID="e48c7be0515108d945e282d59a56892d4e48c74ed09a5e438bac96506f26c6c6" Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.901030 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p54ks" Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.927004 4756 scope.go:117] "RemoveContainer" containerID="e540c810bc48901ee29d5ef2bd960dd338208359092c271646760af9f6f7c38e" Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.948820 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p54ks"] Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.953706 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p54ks"] Feb 24 00:09:04 crc kubenswrapper[4756]: I0224 00:09:04.970429 4756 scope.go:117] "RemoveContainer" containerID="05fde111fb74f7453f4b104a8abc13d2242880114f65ddf6e2a539ad1635b002" Feb 24 00:09:05 crc kubenswrapper[4756]: I0224 00:09:05.842576 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" path="/var/lib/kubelet/pods/2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782/volumes" Feb 24 00:09:09 crc 
kubenswrapper[4756]: I0224 00:09:09.907795 4756 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod68e02cbc-c03c-45cd-916a-16dd3b0052cd"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod68e02cbc-c03c-45cd-916a-16dd3b0052cd] : Timed out while waiting for systemd to remove kubepods-burstable-pod68e02cbc_c03c_45cd_916a_16dd3b0052cd.slice" Feb 24 00:09:09 crc kubenswrapper[4756]: E0224 00:09:09.908374 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod68e02cbc-c03c-45cd-916a-16dd3b0052cd] : unable to destroy cgroup paths for cgroup [kubepods burstable pod68e02cbc-c03c-45cd-916a-16dd3b0052cd] : Timed out while waiting for systemd to remove kubepods-burstable-pod68e02cbc_c03c_45cd_916a_16dd3b0052cd.slice" pod="openshift-image-registry/image-pruner-29531520-pdn56" podUID="68e02cbc-c03c-45cd-916a-16dd3b0052cd" Feb 24 00:09:09 crc kubenswrapper[4756]: I0224 00:09:09.942034 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29531520-pdn56" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.200025 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" podUID="e5cdd0fc-4966-4022-b2c0-3eb556f083b0" containerName="oauth-openshift" containerID="cri-o://84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635" gracePeriod=15 Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.617301 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654037 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58d58b5989-sl5dq"] Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654343 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654360 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654370 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerName="extract-utilities" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654376 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerName="extract-utilities" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654388 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789b9f46-f242-43cc-b598-a8571134461d" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654395 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="789b9f46-f242-43cc-b598-a8571134461d" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654403 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789b9f46-f242-43cc-b598-a8571134461d" containerName="extract-utilities" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654410 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="789b9f46-f242-43cc-b598-a8571134461d" containerName="extract-utilities" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654417 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5245508c-ca61-4b00-8909-21b4abc6458a" containerName="extract-content" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654424 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5245508c-ca61-4b00-8909-21b4abc6458a" containerName="extract-content" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654436 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerName="extract-content" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654442 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerName="extract-content" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654460 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5cdd0fc-4966-4022-b2c0-3eb556f083b0" containerName="oauth-openshift" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654485 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5cdd0fc-4966-4022-b2c0-3eb556f083b0" containerName="oauth-openshift" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654498 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654506 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654522 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5245508c-ca61-4b00-8909-21b4abc6458a" containerName="extract-utilities" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654532 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5245508c-ca61-4b00-8909-21b4abc6458a" containerName="extract-utilities" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654547 4756 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="789b9f46-f242-43cc-b598-a8571134461d" containerName="extract-content" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654557 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="789b9f46-f242-43cc-b598-a8571134461d" containerName="extract-content" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654568 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerName="extract-content" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654577 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerName="extract-content" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654589 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5245508c-ca61-4b00-8909-21b4abc6458a" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654597 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5245508c-ca61-4b00-8909-21b4abc6458a" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: E0224 00:09:13.654609 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerName="extract-utilities" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654617 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerName="extract-utilities" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654756 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="789b9f46-f242-43cc-b598-a8571134461d" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654777 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f1e9439-3efd-4fdc-97ec-a0864e307ec9" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654790 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e5cdd0fc-4966-4022-b2c0-3eb556f083b0" containerName="oauth-openshift" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654799 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca2f2f0-b5ff-4d3d-8e39-03fb0fa6c782" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.654813 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5245508c-ca61-4b00-8909-21b4abc6458a" containerName="registry-server" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.655288 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.678475 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58d58b5989-sl5dq"] Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745191 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-login\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745339 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-router-certs\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745399 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-idp-0-file-data\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: 
\"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745445 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-provider-selection\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745484 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-serving-cert\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745534 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-cliconfig\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745572 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-dir\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745616 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-trusted-ca-bundle\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc 
kubenswrapper[4756]: I0224 00:09:13.745688 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-error\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745730 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-service-ca\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745815 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-policies\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745856 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-ocp-branding-template\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745896 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-session\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.745939 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-f6ws6\" (UniqueName: \"kubernetes.io/projected/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-kube-api-access-f6ws6\") pod \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\" (UID: \"e5cdd0fc-4966-4022-b2c0-3eb556f083b0\") " Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746191 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-session\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746229 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746286 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-template-login\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746354 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzhrb\" (UniqueName: \"kubernetes.io/projected/b882f8d5-908f-428f-9008-24711999ef7c-kube-api-access-zzhrb\") pod \"oauth-openshift-58d58b5989-sl5dq\" 
(UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746395 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746434 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746469 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-template-error\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746542 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b882f8d5-908f-428f-9008-24711999ef7c-audit-dir\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746579 
4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746621 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746716 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-audit-policies\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746761 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746796 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.746828 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.748130 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.748406 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.747623 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.748919 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.749544 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.754381 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.754582 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-kube-api-access-f6ws6" (OuterVolumeSpecName: "kube-api-access-f6ws6") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "kube-api-access-f6ws6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.755291 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.755592 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.755896 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.756735 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.756884 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.757311 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.757797 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e5cdd0fc-4966-4022-b2c0-3eb556f083b0" (UID: "e5cdd0fc-4966-4022-b2c0-3eb556f083b0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.848874 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b882f8d5-908f-428f-9008-24711999ef7c-audit-dir\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.848950 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.848994 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849051 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-audit-policies\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849095 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/b882f8d5-908f-428f-9008-24711999ef7c-audit-dir\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849111 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849262 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849301 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849394 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-session\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 
00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849419 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849484 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-template-login\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849527 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzhrb\" (UniqueName: \"kubernetes.io/projected/b882f8d5-908f-428f-9008-24711999ef7c-kube-api-access-zzhrb\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849551 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849583 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-template-error\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849612 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849722 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849738 4756 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849749 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849762 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849776 4756 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-f6ws6\" (UniqueName: \"kubernetes.io/projected/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-kube-api-access-f6ws6\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849788 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849800 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849812 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849825 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849836 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849848 4756 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 
00:09:13.849860 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849872 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.849885 4756 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e5cdd0fc-4966-4022-b2c0-3eb556f083b0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.850408 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.851802 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.851871 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.851989 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b882f8d5-908f-428f-9008-24711999ef7c-audit-policies\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.854953 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.855256 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-template-error\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.856706 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-session\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: 
I0224 00:09:13.856931 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.857985 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-template-login\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.858145 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.859182 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.869750 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/b882f8d5-908f-428f-9008-24711999ef7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.881999 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzhrb\" (UniqueName: \"kubernetes.io/projected/b882f8d5-908f-428f-9008-24711999ef7c-kube-api-access-zzhrb\") pod \"oauth-openshift-58d58b5989-sl5dq\" (UID: \"b882f8d5-908f-428f-9008-24711999ef7c\") " pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.973592 4756 generic.go:334] "Generic (PLEG): container finished" podID="e5cdd0fc-4966-4022-b2c0-3eb556f083b0" containerID="84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635" exitCode=0 Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.973675 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" event={"ID":"e5cdd0fc-4966-4022-b2c0-3eb556f083b0","Type":"ContainerDied","Data":"84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635"} Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.973734 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" event={"ID":"e5cdd0fc-4966-4022-b2c0-3eb556f083b0","Type":"ContainerDied","Data":"7abd525132751f20dc44d5560bed568ee428df4edeed87133123aba1a4d8a2ec"} Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.973732 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5qkbl" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.973770 4756 scope.go:117] "RemoveContainer" containerID="84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635" Feb 24 00:09:13 crc kubenswrapper[4756]: I0224 00:09:13.994199 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:14 crc kubenswrapper[4756]: I0224 00:09:14.011211 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5qkbl"] Feb 24 00:09:14 crc kubenswrapper[4756]: I0224 00:09:14.016966 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5qkbl"] Feb 24 00:09:14 crc kubenswrapper[4756]: I0224 00:09:14.027938 4756 scope.go:117] "RemoveContainer" containerID="84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635" Feb 24 00:09:14 crc kubenswrapper[4756]: E0224 00:09:14.030905 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635\": container with ID starting with 84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635 not found: ID does not exist" containerID="84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635" Feb 24 00:09:14 crc kubenswrapper[4756]: I0224 00:09:14.030977 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635"} err="failed to get container status \"84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635\": rpc error: code = NotFound desc = could not find container \"84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635\": container with ID starting with 
84ad58378490063fb0c5d03afbb53f96bf8fc072832e08536a4946d32d976635 not found: ID does not exist" Feb 24 00:09:14 crc kubenswrapper[4756]: I0224 00:09:14.268817 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58d58b5989-sl5dq"] Feb 24 00:09:14 crc kubenswrapper[4756]: W0224 00:09:14.277956 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb882f8d5_908f_428f_9008_24711999ef7c.slice/crio-a60f073c597b36549cbb9f3264f5b86e218ea2dae9d46f769614c7ad756087fc WatchSource:0}: Error finding container a60f073c597b36549cbb9f3264f5b86e218ea2dae9d46f769614c7ad756087fc: Status 404 returned error can't find the container with id a60f073c597b36549cbb9f3264f5b86e218ea2dae9d46f769614c7ad756087fc Feb 24 00:09:14 crc kubenswrapper[4756]: I0224 00:09:14.987172 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" event={"ID":"b882f8d5-908f-428f-9008-24711999ef7c","Type":"ContainerStarted","Data":"7f6b6924d860ed0480b1bccf3002b70739be62170cf59c0696d5db71bfaa5070"} Feb 24 00:09:14 crc kubenswrapper[4756]: I0224 00:09:14.987748 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" event={"ID":"b882f8d5-908f-428f-9008-24711999ef7c","Type":"ContainerStarted","Data":"a60f073c597b36549cbb9f3264f5b86e218ea2dae9d46f769614c7ad756087fc"} Feb 24 00:09:14 crc kubenswrapper[4756]: I0224 00:09:14.988156 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:15 crc kubenswrapper[4756]: I0224 00:09:15.017471 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" Feb 24 00:09:15 crc kubenswrapper[4756]: I0224 00:09:15.023561 4756 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-authentication/oauth-openshift-58d58b5989-sl5dq" podStartSLOduration=27.023525136 podStartE2EDuration="27.023525136s" podCreationTimestamp="2026-02-24 00:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:15.021172435 +0000 UTC m=+211.932035088" watchObservedRunningTime="2026-02-24 00:09:15.023525136 +0000 UTC m=+211.934387839" Feb 24 00:09:15 crc kubenswrapper[4756]: I0224 00:09:15.841784 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5cdd0fc-4966-4022-b2c0-3eb556f083b0" path="/var/lib/kubelet/pods/e5cdd0fc-4966-4022-b2c0-3eb556f083b0/volumes" Feb 24 00:09:19 crc kubenswrapper[4756]: I0224 00:09:19.082153 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 24 00:09:22 crc kubenswrapper[4756]: I0224 00:09:22.711758 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:09:22 crc kubenswrapper[4756]: I0224 00:09:22.712267 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.893147 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z94cj"] Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.894133 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-z94cj" podUID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" containerName="registry-server" containerID="cri-o://88a6cc38612d943de3a99d73f8432f95a2e1d2261b877670bd4c071bced9dbeb" gracePeriod=30 Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.935260 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t9hch"] Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.935927 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t9hch" podUID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" containerName="registry-server" containerID="cri-o://9fa074e18c5dc651dc3ce9dc7f75c47eb946f7ca3e5c1c837ea430733a43e735" gracePeriod=30 Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.977157 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j7xwn"] Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.977803 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" podUID="11fea2b9-4369-4534-8101-5fc365d29723" containerName="marketplace-operator" containerID="cri-o://bf4d91725d152a98dab32da45384c9ff6fafeb9ae1ccbef27c9a3e7680af73e8" gracePeriod=30 Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.980303 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xptg"] Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.980760 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2xptg" podUID="5584028b-2ad5-445c-b32a-0a342a022265" containerName="registry-server" containerID="cri-o://88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046" gracePeriod=30 Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.984900 4756 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r5ztj"] Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.985846 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.987726 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zcv6x"] Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.988029 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zcv6x" podUID="e83e15d6-954c-439b-b14e-78527bac2d45" containerName="registry-server" containerID="cri-o://af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13" gracePeriod=30 Feb 24 00:09:29 crc kubenswrapper[4756]: I0224 00:09:29.991571 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r5ztj"] Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.110955 4756 generic.go:334] "Generic (PLEG): container finished" podID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" containerID="9fa074e18c5dc651dc3ce9dc7f75c47eb946f7ca3e5c1c837ea430733a43e735" exitCode=0 Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.111042 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9hch" event={"ID":"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4","Type":"ContainerDied","Data":"9fa074e18c5dc651dc3ce9dc7f75c47eb946f7ca3e5c1c837ea430733a43e735"} Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.112824 4756 generic.go:334] "Generic (PLEG): container finished" podID="11fea2b9-4369-4534-8101-5fc365d29723" containerID="bf4d91725d152a98dab32da45384c9ff6fafeb9ae1ccbef27c9a3e7680af73e8" exitCode=0 Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.112894 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" event={"ID":"11fea2b9-4369-4534-8101-5fc365d29723","Type":"ContainerDied","Data":"bf4d91725d152a98dab32da45384c9ff6fafeb9ae1ccbef27c9a3e7680af73e8"} Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.115712 4756 generic.go:334] "Generic (PLEG): container finished" podID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" containerID="88a6cc38612d943de3a99d73f8432f95a2e1d2261b877670bd4c071bced9dbeb" exitCode=0 Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.115739 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z94cj" event={"ID":"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7","Type":"ContainerDied","Data":"88a6cc38612d943de3a99d73f8432f95a2e1d2261b877670bd4c071bced9dbeb"} Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.135600 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69b6689d-ae32-4f32-a088-588b657e42ce-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r5ztj\" (UID: \"69b6689d-ae32-4f32-a088-588b657e42ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.135651 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69b6689d-ae32-4f32-a088-588b657e42ce-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r5ztj\" (UID: \"69b6689d-ae32-4f32-a088-588b657e42ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.135692 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfhfb\" (UniqueName: \"kubernetes.io/projected/69b6689d-ae32-4f32-a088-588b657e42ce-kube-api-access-jfhfb\") pod 
\"marketplace-operator-79b997595-r5ztj\" (UID: \"69b6689d-ae32-4f32-a088-588b657e42ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.237693 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfhfb\" (UniqueName: \"kubernetes.io/projected/69b6689d-ae32-4f32-a088-588b657e42ce-kube-api-access-jfhfb\") pod \"marketplace-operator-79b997595-r5ztj\" (UID: \"69b6689d-ae32-4f32-a088-588b657e42ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.237797 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69b6689d-ae32-4f32-a088-588b657e42ce-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r5ztj\" (UID: \"69b6689d-ae32-4f32-a088-588b657e42ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.237827 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69b6689d-ae32-4f32-a088-588b657e42ce-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r5ztj\" (UID: \"69b6689d-ae32-4f32-a088-588b657e42ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.240126 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69b6689d-ae32-4f32-a088-588b657e42ce-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r5ztj\" (UID: \"69b6689d-ae32-4f32-a088-588b657e42ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.258606 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jfhfb\" (UniqueName: \"kubernetes.io/projected/69b6689d-ae32-4f32-a088-588b657e42ce-kube-api-access-jfhfb\") pod \"marketplace-operator-79b997595-r5ztj\" (UID: \"69b6689d-ae32-4f32-a088-588b657e42ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.260699 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/69b6689d-ae32-4f32-a088-588b657e42ce-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r5ztj\" (UID: \"69b6689d-ae32-4f32-a088-588b657e42ce\") " pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.405757 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.409312 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.420910 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.544504 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5xrv\" (UniqueName: \"kubernetes.io/projected/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-kube-api-access-p5xrv\") pod \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.544578 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdtbx\" (UniqueName: \"kubernetes.io/projected/11fea2b9-4369-4534-8101-5fc365d29723-kube-api-access-pdtbx\") pod \"11fea2b9-4369-4534-8101-5fc365d29723\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.544644 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-operator-metrics\") pod \"11fea2b9-4369-4534-8101-5fc365d29723\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.544707 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-trusted-ca\") pod \"11fea2b9-4369-4534-8101-5fc365d29723\" (UID: \"11fea2b9-4369-4534-8101-5fc365d29723\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.544734 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-catalog-content\") pod \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.544774 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-utilities\") pod \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\" (UID: \"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.546322 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-utilities" (OuterVolumeSpecName: "utilities") pod "4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" (UID: "4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.546687 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "11fea2b9-4369-4534-8101-5fc365d29723" (UID: "11fea2b9-4369-4534-8101-5fc365d29723"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.577040 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-kube-api-access-p5xrv" (OuterVolumeSpecName: "kube-api-access-p5xrv") pod "4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" (UID: "4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7"). InnerVolumeSpecName "kube-api-access-p5xrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.578291 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "11fea2b9-4369-4534-8101-5fc365d29723" (UID: "11fea2b9-4369-4534-8101-5fc365d29723"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.580195 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11fea2b9-4369-4534-8101-5fc365d29723-kube-api-access-pdtbx" (OuterVolumeSpecName: "kube-api-access-pdtbx") pod "11fea2b9-4369-4534-8101-5fc365d29723" (UID: "11fea2b9-4369-4534-8101-5fc365d29723"). InnerVolumeSpecName "kube-api-access-pdtbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.636862 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" (UID: "4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.655339 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.655394 4756 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/11fea2b9-4369-4534-8101-5fc365d29723-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.655408 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.655421 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.655435 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5xrv\" (UniqueName: \"kubernetes.io/projected/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7-kube-api-access-p5xrv\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.655447 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdtbx\" (UniqueName: \"kubernetes.io/projected/11fea2b9-4369-4534-8101-5fc365d29723-kube-api-access-pdtbx\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.678262 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.700121 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.705542 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.758470 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-catalog-content\") pod \"5584028b-2ad5-445c-b32a-0a342a022265\" (UID: \"5584028b-2ad5-445c-b32a-0a342a022265\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.758718 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zrr6\" (UniqueName: \"kubernetes.io/projected/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-kube-api-access-6zrr6\") pod \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\" (UID: \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.758847 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-catalog-content\") pod \"e83e15d6-954c-439b-b14e-78527bac2d45\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.758912 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-utilities\") pod \"5584028b-2ad5-445c-b32a-0a342a022265\" (UID: \"5584028b-2ad5-445c-b32a-0a342a022265\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.759008 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-klnrp\" (UniqueName: \"kubernetes.io/projected/5584028b-2ad5-445c-b32a-0a342a022265-kube-api-access-klnrp\") pod \"5584028b-2ad5-445c-b32a-0a342a022265\" (UID: \"5584028b-2ad5-445c-b32a-0a342a022265\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.759087 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-utilities\") pod \"e83e15d6-954c-439b-b14e-78527bac2d45\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.759170 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-utilities\") pod \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\" (UID: \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.759252 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4hbk\" (UniqueName: \"kubernetes.io/projected/e83e15d6-954c-439b-b14e-78527bac2d45-kube-api-access-h4hbk\") pod \"e83e15d6-954c-439b-b14e-78527bac2d45\" (UID: \"e83e15d6-954c-439b-b14e-78527bac2d45\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.759365 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-catalog-content\") pod \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\" (UID: \"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4\") " Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.760586 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-utilities" (OuterVolumeSpecName: "utilities") pod "5584028b-2ad5-445c-b32a-0a342a022265" (UID: 
"5584028b-2ad5-445c-b32a-0a342a022265"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.760693 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-utilities" (OuterVolumeSpecName: "utilities") pod "176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" (UID: "176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.761725 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.761759 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.765195 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5584028b-2ad5-445c-b32a-0a342a022265-kube-api-access-klnrp" (OuterVolumeSpecName: "kube-api-access-klnrp") pod "5584028b-2ad5-445c-b32a-0a342a022265" (UID: "5584028b-2ad5-445c-b32a-0a342a022265"). InnerVolumeSpecName "kube-api-access-klnrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.780245 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e83e15d6-954c-439b-b14e-78527bac2d45-kube-api-access-h4hbk" (OuterVolumeSpecName: "kube-api-access-h4hbk") pod "e83e15d6-954c-439b-b14e-78527bac2d45" (UID: "e83e15d6-954c-439b-b14e-78527bac2d45"). InnerVolumeSpecName "kube-api-access-h4hbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.791177 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-kube-api-access-6zrr6" (OuterVolumeSpecName: "kube-api-access-6zrr6") pod "176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" (UID: "176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4"). InnerVolumeSpecName "kube-api-access-6zrr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.799178 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-utilities" (OuterVolumeSpecName: "utilities") pod "e83e15d6-954c-439b-b14e-78527bac2d45" (UID: "e83e15d6-954c-439b-b14e-78527bac2d45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.831172 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5584028b-2ad5-445c-b32a-0a342a022265" (UID: "5584028b-2ad5-445c-b32a-0a342a022265"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.862924 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4hbk\" (UniqueName: \"kubernetes.io/projected/e83e15d6-954c-439b-b14e-78527bac2d45-kube-api-access-h4hbk\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.862982 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5584028b-2ad5-445c-b32a-0a342a022265-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.862997 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zrr6\" (UniqueName: \"kubernetes.io/projected/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-kube-api-access-6zrr6\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.863016 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klnrp\" (UniqueName: \"kubernetes.io/projected/5584028b-2ad5-445c-b32a-0a342a022265-kube-api-access-klnrp\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.863028 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.892917 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" (UID: "176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.921753 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r5ztj"] Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.964430 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:30 crc kubenswrapper[4756]: I0224 00:09:30.965931 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e83e15d6-954c-439b-b14e-78527bac2d45" (UID: "e83e15d6-954c-439b-b14e-78527bac2d45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.066110 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e83e15d6-954c-439b-b14e-78527bac2d45-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.124506 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9hch" event={"ID":"176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4","Type":"ContainerDied","Data":"28e00606df81738f5d28d176766adf976b421e421eb54cb99623281ba24ade77"} Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.124609 4756 scope.go:117] "RemoveContainer" containerID="9fa074e18c5dc651dc3ce9dc7f75c47eb946f7ca3e5c1c837ea430733a43e735" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.124535 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t9hch" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.127006 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" event={"ID":"11fea2b9-4369-4534-8101-5fc365d29723","Type":"ContainerDied","Data":"4e30000b918fd7552256f6e7977960c65680d28d48504888bd6874190080438d"} Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.127130 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-j7xwn" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.131414 4756 generic.go:334] "Generic (PLEG): container finished" podID="5584028b-2ad5-445c-b32a-0a342a022265" containerID="88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046" exitCode=0 Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.131486 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xptg" event={"ID":"5584028b-2ad5-445c-b32a-0a342a022265","Type":"ContainerDied","Data":"88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046"} Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.131518 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2xptg" event={"ID":"5584028b-2ad5-445c-b32a-0a342a022265","Type":"ContainerDied","Data":"cf293041dc71ba6cb7bc08e98713d06257f116d842a692d424c37ed828567ba6"} Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.131592 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2xptg" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.135711 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z94cj" event={"ID":"4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7","Type":"ContainerDied","Data":"ec2d399c28f6cafa0bba31dbbe45f0d60b250ab05e7fcb8b9a89f1b86934fda3"} Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.135818 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z94cj" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.137348 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" event={"ID":"69b6689d-ae32-4f32-a088-588b657e42ce","Type":"ContainerStarted","Data":"2eefb364bb0347fb16f26b474b523888c53b50f2bde3c5fd7a1f2b70dbfbe35b"} Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.137418 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" event={"ID":"69b6689d-ae32-4f32-a088-588b657e42ce","Type":"ContainerStarted","Data":"0a7a5f723a69621e642951390e67a8d4b23e3709610a0c48b832d00a1b14dd5e"} Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.138096 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.139916 4756 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r5ztj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body= Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.139984 4756 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.141551 4756 generic.go:334] "Generic (PLEG): container finished" podID="e83e15d6-954c-439b-b14e-78527bac2d45" containerID="af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13" exitCode=0 Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.141666 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcv6x" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.141654 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcv6x" event={"ID":"e83e15d6-954c-439b-b14e-78527bac2d45","Type":"ContainerDied","Data":"af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13"} Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.141731 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcv6x" event={"ID":"e83e15d6-954c-439b-b14e-78527bac2d45","Type":"ContainerDied","Data":"568a32edcea383571e16973a06878f42139241ae25d752022dfa8e1a3f54dc78"} Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.141882 4756 scope.go:117] "RemoveContainer" containerID="6013f27e2d149dcc61355664c250709bbcef0f19a8c1ff8889c317e558de36a0" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.177377 4756 scope.go:117] "RemoveContainer" containerID="860ce08ae32352a059da4922cd8e3f0d96160e638576a492bec0d32ba4e100d8" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.202197 4756 scope.go:117] "RemoveContainer" containerID="bf4d91725d152a98dab32da45384c9ff6fafeb9ae1ccbef27c9a3e7680af73e8" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.207418 4756 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" podStartSLOduration=2.207384544 podStartE2EDuration="2.207384544s" podCreationTimestamp="2026-02-24 00:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:09:31.166409017 +0000 UTC m=+228.077271670" watchObservedRunningTime="2026-02-24 00:09:31.207384544 +0000 UTC m=+228.118247177" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.218750 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j7xwn"] Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.226184 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-j7xwn"] Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.229131 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t9hch"] Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.234712 4756 scope.go:117] "RemoveContainer" containerID="88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.234871 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t9hch"] Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.241325 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xptg"] Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.245462 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2xptg"] Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.256250 4756 scope.go:117] "RemoveContainer" containerID="9270919b1bc6b549d19242f40d99375f5b5ecd47990d48c7b2947c371b558f58" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.265624 
4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z94cj"] Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.274399 4756 scope.go:117] "RemoveContainer" containerID="5e16b157c80ea50e06005e05b4f0aea0e6bc8e150e89a2199af47f1ab837c959" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.279763 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z94cj"] Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.289716 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zcv6x"] Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.292972 4756 scope.go:117] "RemoveContainer" containerID="88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.293594 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046\": container with ID starting with 88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046 not found: ID does not exist" containerID="88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.293643 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046"} err="failed to get container status \"88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046\": rpc error: code = NotFound desc = could not find container \"88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046\": container with ID starting with 88b6dfddeb0e87376608fb8da5cafe9747cac1acc4735d7eb5573f0c99247046 not found: ID does not exist" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.293678 4756 scope.go:117] "RemoveContainer" 
containerID="9270919b1bc6b549d19242f40d99375f5b5ecd47990d48c7b2947c371b558f58" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.293850 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zcv6x"] Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.293933 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9270919b1bc6b549d19242f40d99375f5b5ecd47990d48c7b2947c371b558f58\": container with ID starting with 9270919b1bc6b549d19242f40d99375f5b5ecd47990d48c7b2947c371b558f58 not found: ID does not exist" containerID="9270919b1bc6b549d19242f40d99375f5b5ecd47990d48c7b2947c371b558f58" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.294124 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9270919b1bc6b549d19242f40d99375f5b5ecd47990d48c7b2947c371b558f58"} err="failed to get container status \"9270919b1bc6b549d19242f40d99375f5b5ecd47990d48c7b2947c371b558f58\": rpc error: code = NotFound desc = could not find container \"9270919b1bc6b549d19242f40d99375f5b5ecd47990d48c7b2947c371b558f58\": container with ID starting with 9270919b1bc6b549d19242f40d99375f5b5ecd47990d48c7b2947c371b558f58 not found: ID does not exist" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.294232 4756 scope.go:117] "RemoveContainer" containerID="5e16b157c80ea50e06005e05b4f0aea0e6bc8e150e89a2199af47f1ab837c959" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.294704 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e16b157c80ea50e06005e05b4f0aea0e6bc8e150e89a2199af47f1ab837c959\": container with ID starting with 5e16b157c80ea50e06005e05b4f0aea0e6bc8e150e89a2199af47f1ab837c959 not found: ID does not exist" containerID="5e16b157c80ea50e06005e05b4f0aea0e6bc8e150e89a2199af47f1ab837c959" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 
00:09:31.294740 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e16b157c80ea50e06005e05b4f0aea0e6bc8e150e89a2199af47f1ab837c959"} err="failed to get container status \"5e16b157c80ea50e06005e05b4f0aea0e6bc8e150e89a2199af47f1ab837c959\": rpc error: code = NotFound desc = could not find container \"5e16b157c80ea50e06005e05b4f0aea0e6bc8e150e89a2199af47f1ab837c959\": container with ID starting with 5e16b157c80ea50e06005e05b4f0aea0e6bc8e150e89a2199af47f1ab837c959 not found: ID does not exist" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.294760 4756 scope.go:117] "RemoveContainer" containerID="88a6cc38612d943de3a99d73f8432f95a2e1d2261b877670bd4c071bced9dbeb" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.309107 4756 scope.go:117] "RemoveContainer" containerID="6bd8e0b86721602e1f75747385dab730dec99960b93e5ca7b29e90381cb2a22b" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.327763 4756 scope.go:117] "RemoveContainer" containerID="e239d8e1f262aa02bbbe72ea503ab4b7883f43dc4655c083f31ad0d5be1cbb04" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.342649 4756 scope.go:117] "RemoveContainer" containerID="af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.358570 4756 scope.go:117] "RemoveContainer" containerID="044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.376230 4756 scope.go:117] "RemoveContainer" containerID="0f1ab1f7d79b813abdaafd52888b9fb54f745887bc5b7d6dd2051640a5a17ff8" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.378635 4756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.378852 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83e15d6-954c-439b-b14e-78527bac2d45" 
containerName="extract-content" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.378871 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83e15d6-954c-439b-b14e-78527bac2d45" containerName="extract-content" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.378884 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" containerName="extract-utilities" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.378892 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" containerName="extract-utilities" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.378902 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5584028b-2ad5-445c-b32a-0a342a022265" containerName="extract-utilities" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.378908 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5584028b-2ad5-445c-b32a-0a342a022265" containerName="extract-utilities" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.378919 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83e15d6-954c-439b-b14e-78527bac2d45" containerName="extract-utilities" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.378927 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83e15d6-954c-439b-b14e-78527bac2d45" containerName="extract-utilities" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.378936 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" containerName="extract-content" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.378942 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" containerName="extract-content" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.378949 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11fea2b9-4369-4534-8101-5fc365d29723" 
containerName="marketplace-operator" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.378956 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="11fea2b9-4369-4534-8101-5fc365d29723" containerName="marketplace-operator" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.378968 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" containerName="extract-content" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.378975 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" containerName="extract-content" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.379005 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5584028b-2ad5-445c-b32a-0a342a022265" containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379012 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5584028b-2ad5-445c-b32a-0a342a022265" containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.379020 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379026 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.379033 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5584028b-2ad5-445c-b32a-0a342a022265" containerName="extract-content" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379039 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="5584028b-2ad5-445c-b32a-0a342a022265" containerName="extract-content" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.379047 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e83e15d6-954c-439b-b14e-78527bac2d45" 
containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379053 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="e83e15d6-954c-439b-b14e-78527bac2d45" containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.379078 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379086 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.379097 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" containerName="extract-utilities" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379104 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" containerName="extract-utilities" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379187 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="5584028b-2ad5-445c-b32a-0a342a022265" containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379197 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379209 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="11fea2b9-4369-4534-8101-5fc365d29723" containerName="marketplace-operator" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379215 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379223 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e83e15d6-954c-439b-b14e-78527bac2d45" containerName="registry-server" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.379603 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.380380 4756 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.380609 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e" gracePeriod=15 Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.380736 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d" gracePeriod=15 Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.380716 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185" gracePeriod=15 Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.380757 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550" gracePeriod=15 Feb 24 00:09:31 crc 
kubenswrapper[4756]: I0224 00:09:31.380690 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e" gracePeriod=15 Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.382446 4756 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.382791 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.382814 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.382827 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.382837 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.382851 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.382858 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.382872 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 
00:09:31.382879 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.382892 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.382898 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.382905 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.382911 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.382919 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.382926 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.382935 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.382943 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.383104 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.383121 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.383141 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.383151 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.383168 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.383183 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.383192 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.383292 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.383308 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.383432 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 24 00:09:31 crc kubenswrapper[4756]: 
I0224 00:09:31.436468 4756 scope.go:117] "RemoveContainer" containerID="af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.441248 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13\": container with ID starting with af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13 not found: ID does not exist" containerID="af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.441454 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13"} err="failed to get container status \"af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13\": rpc error: code = NotFound desc = could not find container \"af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13\": container with ID starting with af9068e40b24bac9039fae271edb7e7b602fd1ba2a99fdc262c924e46940bb13 not found: ID does not exist" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.441567 4756 scope.go:117] "RemoveContainer" containerID="044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.442111 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a\": container with ID starting with 044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a not found: ID does not exist" containerID="044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.442138 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a"} err="failed to get container status \"044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a\": rpc error: code = NotFound desc = could not find container \"044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a\": container with ID starting with 044d9f8b0c2e1b634bad53b3f27a671a3fe915cc0911fd5aecf34ab5bd6b450a not found: ID does not exist" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.442157 4756 scope.go:117] "RemoveContainer" containerID="0f1ab1f7d79b813abdaafd52888b9fb54f745887bc5b7d6dd2051640a5a17ff8" Feb 24 00:09:31 crc kubenswrapper[4756]: E0224 00:09:31.442539 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f1ab1f7d79b813abdaafd52888b9fb54f745887bc5b7d6dd2051640a5a17ff8\": container with ID starting with 0f1ab1f7d79b813abdaafd52888b9fb54f745887bc5b7d6dd2051640a5a17ff8 not found: ID does not exist" containerID="0f1ab1f7d79b813abdaafd52888b9fb54f745887bc5b7d6dd2051640a5a17ff8" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.442612 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1ab1f7d79b813abdaafd52888b9fb54f745887bc5b7d6dd2051640a5a17ff8"} err="failed to get container status \"0f1ab1f7d79b813abdaafd52888b9fb54f745887bc5b7d6dd2051640a5a17ff8\": rpc error: code = NotFound desc = could not find container \"0f1ab1f7d79b813abdaafd52888b9fb54f745887bc5b7d6dd2051640a5a17ff8\": container with ID starting with 0f1ab1f7d79b813abdaafd52888b9fb54f745887bc5b7d6dd2051640a5a17ff8 not found: ID does not exist" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.473636 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.473705 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.473769 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.473800 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.473824 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.473842 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.473874 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.473899 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.497242 4756 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.497312 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.575440 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.575661 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.575684 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.575814 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.575945 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.575991 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.576035 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.576152 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.576160 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.576224 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.576253 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.576309 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.576334 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.576351 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.576378 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.576643 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.841613 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11fea2b9-4369-4534-8101-5fc365d29723" path="/var/lib/kubelet/pods/11fea2b9-4369-4534-8101-5fc365d29723/volumes" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.842153 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4" path="/var/lib/kubelet/pods/176e1cf3-1c1d-4cd4-9faa-f32b649ac3e4/volumes" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.842844 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7" path="/var/lib/kubelet/pods/4fa8e94f-4ca4-4259-8a05-9e18df3a1ff7/volumes" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.843465 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5584028b-2ad5-445c-b32a-0a342a022265" path="/var/lib/kubelet/pods/5584028b-2ad5-445c-b32a-0a342a022265/volumes" Feb 24 00:09:31 crc kubenswrapper[4756]: I0224 00:09:31.844027 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e83e15d6-954c-439b-b14e-78527bac2d45" path="/var/lib/kubelet/pods/e83e15d6-954c-439b-b14e-78527bac2d45/volumes" Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.155721 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.157197 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.158003 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185" exitCode=0 Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.158041 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d" exitCode=0 Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.158051 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550" exitCode=0 Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.158082 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e" exitCode=2 Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.158134 4756 scope.go:117] "RemoveContainer" containerID="7ef8ce5103ee58d1d4a7725a24ac49a20c30665b0b9e5135c14324a7b26d540c" Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.160350 4756 generic.go:334] "Generic (PLEG): container finished" podID="4f14689d-861e-4ded-a7b3-d250eec9093e" containerID="945ac1263b26caf9a3d4bf3fb384f8e100030b6a298f4e3af18644ba919e742a" exitCode=0 Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.160423 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4f14689d-861e-4ded-a7b3-d250eec9093e","Type":"ContainerDied","Data":"945ac1263b26caf9a3d4bf3fb384f8e100030b6a298f4e3af18644ba919e742a"} Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.161956 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:32 crc 
kubenswrapper[4756]: I0224 00:09:32.162164 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r5ztj_69b6689d-ae32-4f32-a088-588b657e42ce/marketplace-operator/0.log" Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.162206 4756 generic.go:334] "Generic (PLEG): container finished" podID="69b6689d-ae32-4f32-a088-588b657e42ce" containerID="2eefb364bb0347fb16f26b474b523888c53b50f2bde3c5fd7a1f2b70dbfbe35b" exitCode=1 Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.162246 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" event={"ID":"69b6689d-ae32-4f32-a088-588b657e42ce","Type":"ContainerDied","Data":"2eefb364bb0347fb16f26b474b523888c53b50f2bde3c5fd7a1f2b70dbfbe35b"} Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.162768 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.162814 4756 scope.go:117] "RemoveContainer" containerID="2eefb364bb0347fb16f26b474b523888c53b50f2bde3c5fd7a1f2b70dbfbe35b" Feb 24 00:09:32 crc kubenswrapper[4756]: I0224 00:09:32.163292 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:32 crc kubenswrapper[4756]: E0224 00:09:32.166895 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/marketplace-operator-79b997595-r5ztj.1897062d8333df20\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{marketplace-operator-79b997595-r5ztj.1897062d8333df20 openshift-marketplace 29634 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:marketplace-operator-79b997595-r5ztj,UID:69b6689d-ae32-4f32-a088-588b657e42ce,APIVersion:v1,ResourceVersion:29597,FieldPath:spec.containers{marketplace-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:30 +0000 UTC,LastTimestamp:2026-02-24 00:09:32.166335198 +0000 UTC m=+229.077197831,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.171752 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.174158 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r5ztj_69b6689d-ae32-4f32-a088-588b657e42ce/marketplace-operator/1.log" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.174602 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r5ztj_69b6689d-ae32-4f32-a088-588b657e42ce/marketplace-operator/0.log" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.174658 4756 generic.go:334] "Generic (PLEG): container finished" 
podID="69b6689d-ae32-4f32-a088-588b657e42ce" containerID="c6e78a2b70977ae84083ed0fa0a1f7bf367f5afae778cc1ca8c5e332541ff47a" exitCode=1 Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.174815 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" event={"ID":"69b6689d-ae32-4f32-a088-588b657e42ce","Type":"ContainerDied","Data":"c6e78a2b70977ae84083ed0fa0a1f7bf367f5afae778cc1ca8c5e332541ff47a"} Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.174963 4756 scope.go:117] "RemoveContainer" containerID="2eefb364bb0347fb16f26b474b523888c53b50f2bde3c5fd7a1f2b70dbfbe35b" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.175563 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.175695 4756 scope.go:117] "RemoveContainer" containerID="c6e78a2b70977ae84083ed0fa0a1f7bf367f5afae778cc1ca8c5e332541ff47a" Feb 24 00:09:33 crc kubenswrapper[4756]: E0224 00:09:33.175989 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-r5ztj_openshift-marketplace(69b6689d-ae32-4f32-a088-588b657e42ce)\"" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.176116 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.392848 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.393595 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.394143 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.397541 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f14689d-861e-4ded-a7b3-d250eec9093e-kube-api-access\") pod \"4f14689d-861e-4ded-a7b3-d250eec9093e\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.397661 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-kubelet-dir\") pod \"4f14689d-861e-4ded-a7b3-d250eec9093e\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.397727 4756 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4f14689d-861e-4ded-a7b3-d250eec9093e" (UID: "4f14689d-861e-4ded-a7b3-d250eec9093e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.397740 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-var-lock\") pod \"4f14689d-861e-4ded-a7b3-d250eec9093e\" (UID: \"4f14689d-861e-4ded-a7b3-d250eec9093e\") " Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.397783 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-var-lock" (OuterVolumeSpecName: "var-lock") pod "4f14689d-861e-4ded-a7b3-d250eec9093e" (UID: "4f14689d-861e-4ded-a7b3-d250eec9093e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.398313 4756 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.398328 4756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4f14689d-861e-4ded-a7b3-d250eec9093e-var-lock\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.403947 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f14689d-861e-4ded-a7b3-d250eec9093e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4f14689d-861e-4ded-a7b3-d250eec9093e" (UID: "4f14689d-861e-4ded-a7b3-d250eec9093e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.499938 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f14689d-861e-4ded-a7b3-d250eec9093e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.743458 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.744403 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.745214 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.745633 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.745895 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 
00:09:33.803142 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.803562 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.803662 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.803345 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.803590 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.803728 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.804168 4756 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.804293 4756 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.804387 4756 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.836244 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.836841 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial 
tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.837204 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:33 crc kubenswrapper[4756]: I0224 00:09:33.844910 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.183153 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"4f14689d-861e-4ded-a7b3-d250eec9093e","Type":"ContainerDied","Data":"ef675dd6450c5236cc2931c4ac6d35681c89f09a0a4fdf0f891307bb5512d2e8"} Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.183193 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.183212 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef675dd6450c5236cc2931c4ac6d35681c89f09a0a4fdf0f891307bb5512d2e8" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.189243 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.190489 4756 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e" exitCode=0 Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.190616 4756 scope.go:117] "RemoveContainer" containerID="0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.190639 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.191476 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.191693 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.191905 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.194509 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r5ztj_69b6689d-ae32-4f32-a088-588b657e42ce/marketplace-operator/1.log" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.194880 4756 scope.go:117] "RemoveContainer" containerID="c6e78a2b70977ae84083ed0fa0a1f7bf367f5afae778cc1ca8c5e332541ff47a" Feb 24 00:09:34 crc kubenswrapper[4756]: E0224 00:09:34.195117 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator 
pod=marketplace-operator-79b997595-r5ztj_openshift-marketplace(69b6689d-ae32-4f32-a088-588b657e42ce)\"" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.196570 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.197126 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.197338 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.198403 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.198810 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" 
pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.199185 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.199581 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.200051 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.200641 4756 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.211142 4756 scope.go:117] "RemoveContainer" 
containerID="e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.232818 4756 scope.go:117] "RemoveContainer" containerID="2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.249739 4756 scope.go:117] "RemoveContainer" containerID="aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.268848 4756 scope.go:117] "RemoveContainer" containerID="9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.292164 4756 scope.go:117] "RemoveContainer" containerID="0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.315235 4756 scope.go:117] "RemoveContainer" containerID="0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185" Feb 24 00:09:34 crc kubenswrapper[4756]: E0224 00:09:34.316594 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\": container with ID starting with 0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185 not found: ID does not exist" containerID="0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.316686 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185"} err="failed to get container status \"0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\": rpc error: code = NotFound desc = could not find container \"0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185\": container with ID starting with 
0f8a85d0fa4e3417f78915d2487c7faf8401280c359601b8a3608b5f07445185 not found: ID does not exist" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.316738 4756 scope.go:117] "RemoveContainer" containerID="e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d" Feb 24 00:09:34 crc kubenswrapper[4756]: E0224 00:09:34.317795 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\": container with ID starting with e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d not found: ID does not exist" containerID="e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.317887 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d"} err="failed to get container status \"e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\": rpc error: code = NotFound desc = could not find container \"e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d\": container with ID starting with e67dcc495452c9cb5e7e4ba01031c06751f712bb0ccda33ee2fe76286f0fdb2d not found: ID does not exist" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.317931 4756 scope.go:117] "RemoveContainer" containerID="2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550" Feb 24 00:09:34 crc kubenswrapper[4756]: E0224 00:09:34.318493 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\": container with ID starting with 2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550 not found: ID does not exist" containerID="2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550" Feb 24 00:09:34 crc 
kubenswrapper[4756]: I0224 00:09:34.318581 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550"} err="failed to get container status \"2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\": rpc error: code = NotFound desc = could not find container \"2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550\": container with ID starting with 2be424038d1c3272ea6a4afd7584bd6fc34d939271fe28fd014c351d5a463550 not found: ID does not exist" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.318670 4756 scope.go:117] "RemoveContainer" containerID="aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e" Feb 24 00:09:34 crc kubenswrapper[4756]: E0224 00:09:34.319220 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\": container with ID starting with aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e not found: ID does not exist" containerID="aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.319273 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e"} err="failed to get container status \"aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\": rpc error: code = NotFound desc = could not find container \"aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e\": container with ID starting with aacf6bc49638b57dee4bc70ef7d2a9809a42a8c907709e2f59618e1e3d361a5e not found: ID does not exist" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.319311 4756 scope.go:117] "RemoveContainer" containerID="9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e" Feb 24 
00:09:34 crc kubenswrapper[4756]: E0224 00:09:34.320734 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\": container with ID starting with 9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e not found: ID does not exist" containerID="9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.320801 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e"} err="failed to get container status \"9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\": rpc error: code = NotFound desc = could not find container \"9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e\": container with ID starting with 9485bd7ce534d4cbc6372ff63bba5f234b2215e9cb400e9c5d3e88353611ca9e not found: ID does not exist" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.320847 4756 scope.go:117] "RemoveContainer" containerID="0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5" Feb 24 00:09:34 crc kubenswrapper[4756]: E0224 00:09:34.321497 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\": container with ID starting with 0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5 not found: ID does not exist" containerID="0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5" Feb 24 00:09:34 crc kubenswrapper[4756]: I0224 00:09:34.321588 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5"} err="failed to get container status 
\"0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\": rpc error: code = NotFound desc = could not find container \"0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5\": container with ID starting with 0c2243150723841ada5680ca01986831f4015b1201a7be3e21235c4a5c4bece5 not found: ID does not exist" Feb 24 00:09:36 crc kubenswrapper[4756]: E0224 00:09:36.418687 4756 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:36 crc kubenswrapper[4756]: I0224 00:09:36.419326 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:36 crc kubenswrapper[4756]: W0224 00:09:36.437632 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-d48b22c58b6bc58762354ace00d16155024d60bfacc928a3190964e93a1e48ec WatchSource:0}: Error finding container d48b22c58b6bc58762354ace00d16155024d60bfacc928a3190964e93a1e48ec: Status 404 returned error can't find the container with id d48b22c58b6bc58762354ace00d16155024d60bfacc928a3190964e93a1e48ec Feb 24 00:09:37 crc kubenswrapper[4756]: I0224 00:09:37.216491 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9"} Feb 24 00:09:37 crc kubenswrapper[4756]: I0224 00:09:37.217039 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d48b22c58b6bc58762354ace00d16155024d60bfacc928a3190964e93a1e48ec"} Feb 24 00:09:37 crc kubenswrapper[4756]: E0224 00:09:37.217977 4756 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:09:37 crc kubenswrapper[4756]: I0224 00:09:37.218023 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:37 crc kubenswrapper[4756]: I0224 00:09:37.218558 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:37 crc kubenswrapper[4756]: E0224 00:09:37.877200 4756 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" volumeName="registry-storage" Feb 24 00:09:39 crc kubenswrapper[4756]: E0224 00:09:39.437968 4756 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:39 crc kubenswrapper[4756]: E0224 00:09:39.438661 4756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:39 crc kubenswrapper[4756]: E0224 00:09:39.439407 4756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:39 crc kubenswrapper[4756]: E0224 00:09:39.439729 4756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:39 crc kubenswrapper[4756]: E0224 00:09:39.440054 4756 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:39 crc kubenswrapper[4756]: I0224 00:09:39.440122 4756 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 24 00:09:39 crc kubenswrapper[4756]: E0224 00:09:39.440388 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms" Feb 24 00:09:39 crc kubenswrapper[4756]: E0224 00:09:39.641979 4756 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms" Feb 24 00:09:40 crc kubenswrapper[4756]: E0224 00:09:40.042567 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="800ms" Feb 24 00:09:40 crc kubenswrapper[4756]: E0224 00:09:40.288742 4756 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/marketplace-operator-79b997595-r5ztj.1897062d8333df20\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{marketplace-operator-79b997595-r5ztj.1897062d8333df20 openshift-marketplace 29634 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:marketplace-operator-79b997595-r5ztj,UID:69b6689d-ae32-4f32-a088-588b657e42ce,APIVersion:v1,ResourceVersion:29597,FieldPath:spec.containers{marketplace-operator},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-24 00:09:30 +0000 UTC,LastTimestamp:2026-02-24 00:09:32.166335198 +0000 UTC m=+229.077197831,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 24 00:09:40 crc kubenswrapper[4756]: I0224 00:09:40.406550 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 
00:09:40 crc kubenswrapper[4756]: I0224 00:09:40.406643 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:09:40 crc kubenswrapper[4756]: I0224 00:09:40.407351 4756 scope.go:117] "RemoveContainer" containerID="c6e78a2b70977ae84083ed0fa0a1f7bf367f5afae778cc1ca8c5e332541ff47a" Feb 24 00:09:40 crc kubenswrapper[4756]: E0224 00:09:40.407862 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-r5ztj_openshift-marketplace(69b6689d-ae32-4f32-a088-588b657e42ce)\"" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" Feb 24 00:09:40 crc kubenswrapper[4756]: E0224 00:09:40.843335 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s" Feb 24 00:09:42 crc kubenswrapper[4756]: E0224 00:09:42.444576 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s" Feb 24 00:09:43 crc kubenswrapper[4756]: I0224 00:09:43.836302 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:43 crc kubenswrapper[4756]: I0224 00:09:43.836730 4756 status_manager.go:851] 
"Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:44 crc kubenswrapper[4756]: I0224 00:09:44.270477 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 00:09:44 crc kubenswrapper[4756]: I0224 00:09:44.271592 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 24 00:09:44 crc kubenswrapper[4756]: I0224 00:09:44.271648 4756 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f" exitCode=1 Feb 24 00:09:44 crc kubenswrapper[4756]: I0224 00:09:44.271690 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f"} Feb 24 00:09:44 crc kubenswrapper[4756]: I0224 00:09:44.272136 4756 scope.go:117] "RemoveContainer" containerID="a0c32dfce39a5c44d7862eac37107216127840d61984b38fc23e26654246f33f" Feb 24 00:09:44 crc kubenswrapper[4756]: I0224 00:09:44.273124 4756 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: 
connect: connection refused" Feb 24 00:09:44 crc kubenswrapper[4756]: I0224 00:09:44.273692 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:44 crc kubenswrapper[4756]: I0224 00:09:44.273931 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:44 crc kubenswrapper[4756]: I0224 00:09:44.407723 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.293154 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.294566 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.294660 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"21c0e1f1f2782763d6e5d63ded655864e005986b4b10c1d5447a1e710ca54206"} Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.296128 4756 status_manager.go:851] "Failed to get status for 
pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.296770 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.297433 4756 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:45 crc kubenswrapper[4756]: E0224 00:09:45.646339 4756 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="6.4s" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.833571 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.834686 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.835664 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.836183 4756 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.852791 4756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dc3a1f-81b2-4003-bbcd-b3664c283fcd" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.852845 4756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dc3a1f-81b2-4003-bbcd-b3664c283fcd" Feb 24 00:09:45 crc kubenswrapper[4756]: E0224 00:09:45.853729 4756 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: 
connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:45 crc kubenswrapper[4756]: I0224 00:09:45.854714 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:45 crc kubenswrapper[4756]: W0224 00:09:45.872839 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-bd56ba00947e6e96693726e1e415c6bfd58ed2e1987242a43f56c7a6bc390373 WatchSource:0}: Error finding container bd56ba00947e6e96693726e1e415c6bfd58ed2e1987242a43f56c7a6bc390373: Status 404 returned error can't find the container with id bd56ba00947e6e96693726e1e415c6bfd58ed2e1987242a43f56c7a6bc390373 Feb 24 00:09:46 crc kubenswrapper[4756]: I0224 00:09:46.302192 4756 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="93575566eb01032a75f87432235681e89bb5171684292fadbeb648619494d53d" exitCode=0 Feb 24 00:09:46 crc kubenswrapper[4756]: I0224 00:09:46.302299 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"93575566eb01032a75f87432235681e89bb5171684292fadbeb648619494d53d"} Feb 24 00:09:46 crc kubenswrapper[4756]: I0224 00:09:46.302409 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bd56ba00947e6e96693726e1e415c6bfd58ed2e1987242a43f56c7a6bc390373"} Feb 24 00:09:46 crc kubenswrapper[4756]: I0224 00:09:46.302970 4756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dc3a1f-81b2-4003-bbcd-b3664c283fcd" Feb 24 00:09:46 crc kubenswrapper[4756]: I0224 00:09:46.302999 4756 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dc3a1f-81b2-4003-bbcd-b3664c283fcd" Feb 24 00:09:46 crc kubenswrapper[4756]: I0224 00:09:46.304042 4756 status_manager.go:851] "Failed to get status for pod" podUID="69b6689d-ae32-4f32-a088-588b657e42ce" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-79b997595-r5ztj\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:46 crc kubenswrapper[4756]: E0224 00:09:46.304231 4756 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:46 crc kubenswrapper[4756]: I0224 00:09:46.304310 4756 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:46 crc kubenswrapper[4756]: I0224 00:09:46.304488 4756 status_manager.go:851] "Failed to get status for pod" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.132:6443: connect: connection refused" Feb 24 00:09:47 crc kubenswrapper[4756]: I0224 00:09:47.312420 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"725163aed81667c6b644e92e672efca64c4986442ec9993b26142929b0f914dd"} Feb 24 00:09:47 
crc kubenswrapper[4756]: I0224 00:09:47.312804 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1985f359ebcc3767af31a50f4e7bf54cdf95a2c6732b13560474c81ba3d7a485"} Feb 24 00:09:47 crc kubenswrapper[4756]: I0224 00:09:47.312820 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5547ffd6c18af85228003ee649d630354df96385270a10995aa690b0d883a589"} Feb 24 00:09:47 crc kubenswrapper[4756]: I0224 00:09:47.312835 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8b020720021de2953d293d86e7f1112294e0a767e47afee064b79a26db07e824"} Feb 24 00:09:48 crc kubenswrapper[4756]: I0224 00:09:48.321959 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"24b4d9a20d3fe329870abc535d6da06a1e94835a86266b8fdc2da70e9c6d5f33"} Feb 24 00:09:48 crc kubenswrapper[4756]: I0224 00:09:48.322385 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:48 crc kubenswrapper[4756]: I0224 00:09:48.322413 4756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dc3a1f-81b2-4003-bbcd-b3664c283fcd" Feb 24 00:09:48 crc kubenswrapper[4756]: I0224 00:09:48.322451 4756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dc3a1f-81b2-4003-bbcd-b3664c283fcd" Feb 24 00:09:50 crc kubenswrapper[4756]: I0224 00:09:50.855215 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:50 crc kubenswrapper[4756]: I0224 00:09:50.855283 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:50 crc kubenswrapper[4756]: I0224 00:09:50.866904 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:50 crc kubenswrapper[4756]: I0224 00:09:50.972827 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:09:51 crc kubenswrapper[4756]: I0224 00:09:51.044602 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 24 00:09:51 crc kubenswrapper[4756]: I0224 00:09:51.044852 4756 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 24 00:09:51 crc kubenswrapper[4756]: I0224 00:09:51.045184 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 24 00:09:52 crc kubenswrapper[4756]: I0224 00:09:52.711001 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:09:52 crc 
kubenswrapper[4756]: I0224 00:09:52.711538 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:09:53 crc kubenswrapper[4756]: I0224 00:09:53.711416 4756 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 24 00:09:53 crc kubenswrapper[4756]: I0224 00:09:53.863870 4756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="30e3a13d-9dc6-4d59-b226-0c4bcb488190" Feb 24 00:09:54 crc kubenswrapper[4756]: I0224 00:09:54.361181 4756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dc3a1f-81b2-4003-bbcd-b3664c283fcd" Feb 24 00:09:54 crc kubenswrapper[4756]: I0224 00:09:54.361894 4756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dc3a1f-81b2-4003-bbcd-b3664c283fcd" Feb 24 00:09:54 crc kubenswrapper[4756]: I0224 00:09:54.365313 4756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="30e3a13d-9dc6-4d59-b226-0c4bcb488190" Feb 24 00:09:54 crc kubenswrapper[4756]: I0224 00:09:54.366102 4756 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://8b020720021de2953d293d86e7f1112294e0a767e47afee064b79a26db07e824" Feb 24 00:09:54 crc kubenswrapper[4756]: I0224 00:09:54.366226 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:09:55 crc kubenswrapper[4756]: I0224 00:09:55.366978 4756 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dc3a1f-81b2-4003-bbcd-b3664c283fcd"
Feb 24 00:09:55 crc kubenswrapper[4756]: I0224 00:09:55.367704 4756 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="12dc3a1f-81b2-4003-bbcd-b3664c283fcd"
Feb 24 00:09:55 crc kubenswrapper[4756]: I0224 00:09:55.371839 4756 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="30e3a13d-9dc6-4d59-b226-0c4bcb488190"
Feb 24 00:09:55 crc kubenswrapper[4756]: I0224 00:09:55.833474 4756 scope.go:117] "RemoveContainer" containerID="c6e78a2b70977ae84083ed0fa0a1f7bf367f5afae778cc1ca8c5e332541ff47a"
Feb 24 00:09:56 crc kubenswrapper[4756]: I0224 00:09:56.374123 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r5ztj_69b6689d-ae32-4f32-a088-588b657e42ce/marketplace-operator/2.log"
Feb 24 00:09:56 crc kubenswrapper[4756]: I0224 00:09:56.375612 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r5ztj_69b6689d-ae32-4f32-a088-588b657e42ce/marketplace-operator/1.log"
Feb 24 00:09:56 crc kubenswrapper[4756]: I0224 00:09:56.375756 4756 generic.go:334] "Generic (PLEG): container finished" podID="69b6689d-ae32-4f32-a088-588b657e42ce" containerID="5105bd38d0de3bb68b4ec7c2b2a5fd342dd2b07839e683b78fe1d2a30aa44691" exitCode=1
Feb 24 00:09:56 crc kubenswrapper[4756]: I0224 00:09:56.375837 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" event={"ID":"69b6689d-ae32-4f32-a088-588b657e42ce","Type":"ContainerDied","Data":"5105bd38d0de3bb68b4ec7c2b2a5fd342dd2b07839e683b78fe1d2a30aa44691"}
Feb 24 00:09:56 crc kubenswrapper[4756]: I0224 00:09:56.375993 4756 scope.go:117] "RemoveContainer" containerID="c6e78a2b70977ae84083ed0fa0a1f7bf367f5afae778cc1ca8c5e332541ff47a"
Feb 24 00:09:56 crc kubenswrapper[4756]: I0224 00:09:56.376700 4756 scope.go:117] "RemoveContainer" containerID="5105bd38d0de3bb68b4ec7c2b2a5fd342dd2b07839e683b78fe1d2a30aa44691"
Feb 24 00:09:56 crc kubenswrapper[4756]: E0224 00:09:56.377385 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-r5ztj_openshift-marketplace(69b6689d-ae32-4f32-a088-588b657e42ce)\"" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" podUID="69b6689d-ae32-4f32-a088-588b657e42ce"
Feb 24 00:09:57 crc kubenswrapper[4756]: I0224 00:09:57.386224 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r5ztj_69b6689d-ae32-4f32-a088-588b657e42ce/marketplace-operator/2.log"
Feb 24 00:10:00 crc kubenswrapper[4756]: I0224 00:10:00.406140 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj"
Feb 24 00:10:00 crc kubenswrapper[4756]: I0224 00:10:00.406985 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj"
Feb 24 00:10:00 crc kubenswrapper[4756]: I0224 00:10:00.407889 4756 scope.go:117] "RemoveContainer" containerID="5105bd38d0de3bb68b4ec7c2b2a5fd342dd2b07839e683b78fe1d2a30aa44691"
Feb 24 00:10:00 crc kubenswrapper[4756]: E0224 00:10:00.408576 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-r5ztj_openshift-marketplace(69b6689d-ae32-4f32-a088-588b657e42ce)\"" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" podUID="69b6689d-ae32-4f32-a088-588b657e42ce"
Feb 24 00:10:01 crc kubenswrapper[4756]: I0224 00:10:01.049907 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:10:01 crc kubenswrapper[4756]: I0224 00:10:01.055959 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 24 00:10:02 crc kubenswrapper[4756]: I0224 00:10:02.953484 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 24 00:10:03 crc kubenswrapper[4756]: I0224 00:10:03.171609 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 24 00:10:03 crc kubenswrapper[4756]: I0224 00:10:03.426690 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 24 00:10:03 crc kubenswrapper[4756]: I0224 00:10:03.964395 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 24 00:10:04 crc kubenswrapper[4756]: I0224 00:10:04.204690 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 24 00:10:04 crc kubenswrapper[4756]: I0224 00:10:04.673466 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 24 00:10:04 crc kubenswrapper[4756]: I0224 00:10:04.849494 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 24 00:10:05 crc kubenswrapper[4756]: I0224 00:10:05.072136 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 24 00:10:05 crc kubenswrapper[4756]: I0224 00:10:05.296711 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 24 00:10:05 crc kubenswrapper[4756]: I0224 00:10:05.302306 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 24 00:10:05 crc kubenswrapper[4756]: I0224 00:10:05.337161 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 24 00:10:05 crc kubenswrapper[4756]: I0224 00:10:05.733486 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 24 00:10:06 crc kubenswrapper[4756]: I0224 00:10:06.160456 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 24 00:10:06 crc kubenswrapper[4756]: I0224 00:10:06.327511 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 24 00:10:06 crc kubenswrapper[4756]: I0224 00:10:06.334515 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 24 00:10:06 crc kubenswrapper[4756]: I0224 00:10:06.415368 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 24 00:10:06 crc kubenswrapper[4756]: I0224 00:10:06.492330 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 24 00:10:06 crc kubenswrapper[4756]: I0224 00:10:06.750275 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 24 00:10:06 crc kubenswrapper[4756]: I0224 00:10:06.785470 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 24 00:10:06 crc kubenswrapper[4756]: I0224 00:10:06.795141 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 24 00:10:06 crc kubenswrapper[4756]: I0224 00:10:06.841180 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.005447 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.040605 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.051640 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.051661 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.084493 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.117647 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.173589 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.199608 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.382441 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.465806 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.660430 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.788808 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.833993 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.852115 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.872696 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 00:10:07 crc kubenswrapper[4756]: I0224 00:10:07.941936 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.023949 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.032041 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.059192 4756 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.067477 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.131878 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.133824 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.149482 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.236130 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.327464 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.339921 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.398315 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.406047 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.523002 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.545680 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.578862 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.581510 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.584091 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.599725 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.659995 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.694514 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.744792 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.871629 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.901806 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.927743 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 24 00:10:08 crc kubenswrapper[4756]: I0224 00:10:08.966588 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 24 00:10:09 crc kubenswrapper[4756]: I0224 00:10:09.021401 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 24 00:10:09 crc kubenswrapper[4756]: I0224 00:10:09.021481 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 24 00:10:09 crc kubenswrapper[4756]: I0224 00:10:09.157436 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 24 00:10:09 crc kubenswrapper[4756]: I0224 00:10:09.252865 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 24 00:10:09 crc kubenswrapper[4756]: I0224 00:10:09.328961 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 24 00:10:09 crc kubenswrapper[4756]: I0224 00:10:09.348572 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 24 00:10:09 crc kubenswrapper[4756]: I0224 00:10:09.352433 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 24 00:10:09 crc kubenswrapper[4756]: I0224 00:10:09.368489 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 24 00:10:09 crc kubenswrapper[4756]: I0224 00:10:09.468536 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 24 00:10:09 crc kubenswrapper[4756]: I0224 00:10:09.561266 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 24 00:10:09 crc kubenswrapper[4756]: I0224 00:10:09.741397 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.063880 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.065101 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.175236 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.182800 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.366703 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.459483 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.532100 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.551150 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.637428 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.687855 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.709400 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.723443 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.732869 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.739258 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.946289 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 24 00:10:10 crc kubenswrapper[4756]: I0224 00:10:10.952297 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.001869 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.027860 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.028406 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.047857 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.140131 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.153175 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.209515 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.267027 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.279912 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.333337 4756 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.385075 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.397725 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.462208 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.502630 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.513072 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.661877 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.754223 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.837039 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.862898 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.913310 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.917548 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 24 00:10:11 crc kubenswrapper[4756]: I0224 00:10:11.992286 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.069582 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.098451 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.157340 4756 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.163765 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.163890 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.175055 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.189444 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.191074 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.195559 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.195517451 podStartE2EDuration="19.195517451s" podCreationTimestamp="2026-02-24 00:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:12.188336329 +0000 UTC m=+269.099198992" watchObservedRunningTime="2026-02-24 00:10:12.195517451 +0000 UTC m=+269.106380074"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.278877 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.305857 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.323785 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.389099 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.390799 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.509666 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.527229 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.606541 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.630022 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.677440 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.800362 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.811578 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.840997 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.910160 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.933938 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.957577 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.983130 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 24 00:10:12 crc kubenswrapper[4756]: I0224 00:10:12.988547 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.022202 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.024211 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.099841 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.134774 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.217929 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.221118 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.222330 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.278709 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.279138 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.387413 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.392335 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.447564 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.450042 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.566967 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.588808 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.643447 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.656512 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.689443 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.749260 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.750880 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 24 00:10:13 crc kubenswrapper[4756]: I0224 00:10:13.889345 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.025506 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.115165 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.237834 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.244617 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.286776 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.304852 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.344832 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.414604 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.414604 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.460947 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.492171 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.502702 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.531264 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.620096 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.688399 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.758955 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.833291 4756 scope.go:117] "RemoveContainer" containerID="5105bd38d0de3bb68b4ec7c2b2a5fd342dd2b07839e683b78fe1d2a30aa44691"
Feb 24 00:10:14 crc kubenswrapper[4756]: E0224 00:10:14.833659 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-r5ztj_openshift-marketplace(69b6689d-ae32-4f32-a088-588b657e42ce)\"" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" podUID="69b6689d-ae32-4f32-a088-588b657e42ce"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.915710 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.945030 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.947337 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 24 00:10:14 crc kubenswrapper[4756]: I0224 00:10:14.965641 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.011690 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.013622 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.043352 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.043481 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.078526 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.122772 4756 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.204505 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.327414 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.339198 4756 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.408878 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.449732 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.527787 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.561313 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 24 00:10:15 crc kubenswrapper[4756]: I0224
00:10:15.578155 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.586469 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.649032 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.658758 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.696323 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.779131 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.871578 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.897319 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 00:10:15 crc kubenswrapper[4756]: I0224 00:10:15.994238 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.125953 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.167089 4756 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.171228 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.195509 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.316257 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.377901 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.397168 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.414861 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.459314 4756 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.460105 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9" gracePeriod=5 Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.479865 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 24 
00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.495642 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.735129 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.855053 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.858807 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.883267 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.895316 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 24 00:10:16 crc kubenswrapper[4756]: I0224 00:10:16.991305 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.027862 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.037977 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.039577 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.052332 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 24 00:10:17 crc 
kubenswrapper[4756]: I0224 00:10:17.251452 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.340694 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.342381 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.365133 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.365272 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.403488 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.501692 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.639502 4756 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.700465 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.739956 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.860895 4756 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 24 00:10:17 crc kubenswrapper[4756]: I0224 00:10:17.946001 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 24 00:10:18 crc kubenswrapper[4756]: I0224 00:10:18.148772 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 24 00:10:18 crc kubenswrapper[4756]: I0224 00:10:18.355129 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 24 00:10:18 crc kubenswrapper[4756]: I0224 00:10:18.417146 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 24 00:10:18 crc kubenswrapper[4756]: I0224 00:10:18.474868 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 24 00:10:18 crc kubenswrapper[4756]: I0224 00:10:18.843308 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 24 00:10:18 crc kubenswrapper[4756]: I0224 00:10:18.868802 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 00:10:18 crc kubenswrapper[4756]: I0224 00:10:18.926670 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 24 00:10:18 crc kubenswrapper[4756]: I0224 00:10:18.961668 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 24 00:10:19 crc kubenswrapper[4756]: I0224 00:10:19.031807 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 24 00:10:19 crc kubenswrapper[4756]: I0224 00:10:19.047967 4756 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 24 00:10:19 crc kubenswrapper[4756]: I0224 00:10:19.061019 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 24 00:10:19 crc kubenswrapper[4756]: I0224 00:10:19.093991 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 24 00:10:19 crc kubenswrapper[4756]: I0224 00:10:19.232550 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 24 00:10:19 crc kubenswrapper[4756]: I0224 00:10:19.449592 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 24 00:10:19 crc kubenswrapper[4756]: I0224 00:10:19.964737 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 24 00:10:20 crc kubenswrapper[4756]: I0224 00:10:20.038292 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 24 00:10:20 crc kubenswrapper[4756]: I0224 00:10:20.047721 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 24 00:10:20 crc kubenswrapper[4756]: I0224 00:10:20.597810 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 24 00:10:20 crc kubenswrapper[4756]: I0224 00:10:20.726426 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.052140 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.052336 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.075154 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.075214 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.075284 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.075308 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.075328 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.075484 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.075523 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.075567 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.075624 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.088930 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.176608 4756 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.176665 4756 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.176678 4756 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.176687 4756 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.176698 4756 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.532321 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.532389 4756 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9" exitCode=137 Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.532443 4756 scope.go:117] "RemoveContainer" containerID="adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.532503 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.564012 4756 scope.go:117] "RemoveContainer" containerID="adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9" Feb 24 00:10:22 crc kubenswrapper[4756]: E0224 00:10:22.564651 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9\": container with ID starting with adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9 not found: ID does not exist" containerID="adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.564688 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9"} err="failed to get container status \"adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9\": rpc error: code = NotFound desc = could not find container \"adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9\": container with ID starting with adb554416489d654bf682583a6525f5071ef7ff774ab885aee589cca6eaaf5e9 not found: 
ID does not exist" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.711608 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.711708 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.711769 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.712575 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c"} pod="openshift-machine-config-operator/machine-config-daemon-qb88h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:10:22 crc kubenswrapper[4756]: I0224 00:10:22.712643 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" containerID="cri-o://1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c" gracePeriod=600 Feb 24 00:10:23 crc kubenswrapper[4756]: I0224 00:10:23.540207 4756 generic.go:334] "Generic (PLEG): container finished" podID="071714b1-b44e-4085-adf5-0ed6b6e64af3" 
containerID="1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c" exitCode=0 Feb 24 00:10:23 crc kubenswrapper[4756]: I0224 00:10:23.540301 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerDied","Data":"1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c"} Feb 24 00:10:23 crc kubenswrapper[4756]: I0224 00:10:23.540877 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerStarted","Data":"3462137efdd08477626614f3719291731120c43a9269032f0fe7f282c877172c"} Feb 24 00:10:23 crc kubenswrapper[4756]: I0224 00:10:23.843698 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 24 00:10:28 crc kubenswrapper[4756]: I0224 00:10:28.834442 4756 scope.go:117] "RemoveContainer" containerID="5105bd38d0de3bb68b4ec7c2b2a5fd342dd2b07839e683b78fe1d2a30aa44691" Feb 24 00:10:29 crc kubenswrapper[4756]: I0224 00:10:29.580579 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r5ztj_69b6689d-ae32-4f32-a088-588b657e42ce/marketplace-operator/2.log" Feb 24 00:10:29 crc kubenswrapper[4756]: I0224 00:10:29.580963 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" event={"ID":"69b6689d-ae32-4f32-a088-588b657e42ce","Type":"ContainerStarted","Data":"1aba71cfb882b069f717a1ea44e04a5ccaa575cf769ca9a25c774d8ba2033f98"} Feb 24 00:10:29 crc kubenswrapper[4756]: I0224 00:10:29.581331 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:10:29 crc 
kubenswrapper[4756]: I0224 00:10:29.585798 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-r5ztj" Feb 24 00:10:40 crc kubenswrapper[4756]: I0224 00:10:40.915171 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-44tfc"] Feb 24 00:10:40 crc kubenswrapper[4756]: E0224 00:10:40.916400 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" containerName="installer" Feb 24 00:10:40 crc kubenswrapper[4756]: I0224 00:10:40.916420 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" containerName="installer" Feb 24 00:10:40 crc kubenswrapper[4756]: E0224 00:10:40.916453 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 24 00:10:40 crc kubenswrapper[4756]: I0224 00:10:40.916463 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 24 00:10:40 crc kubenswrapper[4756]: I0224 00:10:40.916621 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 24 00:10:40 crc kubenswrapper[4756]: I0224 00:10:40.916638 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f14689d-861e-4ded-a7b3-d250eec9093e" containerName="installer" Feb 24 00:10:40 crc kubenswrapper[4756]: I0224 00:10:40.917669 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-44tfc" Feb 24 00:10:40 crc kubenswrapper[4756]: I0224 00:10:40.920894 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 24 00:10:40 crc kubenswrapper[4756]: I0224 00:10:40.930990 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-44tfc"] Feb 24 00:10:40 crc kubenswrapper[4756]: I0224 00:10:40.957403 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6plcz\" (UniqueName: \"kubernetes.io/projected/31e6e2f7-8e2a-4013-89c2-f6520a649a87-kube-api-access-6plcz\") pod \"community-operators-44tfc\" (UID: \"31e6e2f7-8e2a-4013-89c2-f6520a649a87\") " pod="openshift-marketplace/community-operators-44tfc" Feb 24 00:10:40 crc kubenswrapper[4756]: I0224 00:10:40.957621 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e6e2f7-8e2a-4013-89c2-f6520a649a87-catalog-content\") pod \"community-operators-44tfc\" (UID: \"31e6e2f7-8e2a-4013-89c2-f6520a649a87\") " pod="openshift-marketplace/community-operators-44tfc" Feb 24 00:10:40 crc kubenswrapper[4756]: I0224 00:10:40.957781 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e6e2f7-8e2a-4013-89c2-f6520a649a87-utilities\") pod \"community-operators-44tfc\" (UID: \"31e6e2f7-8e2a-4013-89c2-f6520a649a87\") " pod="openshift-marketplace/community-operators-44tfc" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.059188 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e6e2f7-8e2a-4013-89c2-f6520a649a87-utilities\") pod \"community-operators-44tfc\" (UID: 
\"31e6e2f7-8e2a-4013-89c2-f6520a649a87\") " pod="openshift-marketplace/community-operators-44tfc" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.059309 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6plcz\" (UniqueName: \"kubernetes.io/projected/31e6e2f7-8e2a-4013-89c2-f6520a649a87-kube-api-access-6plcz\") pod \"community-operators-44tfc\" (UID: \"31e6e2f7-8e2a-4013-89c2-f6520a649a87\") " pod="openshift-marketplace/community-operators-44tfc" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.059441 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e6e2f7-8e2a-4013-89c2-f6520a649a87-catalog-content\") pod \"community-operators-44tfc\" (UID: \"31e6e2f7-8e2a-4013-89c2-f6520a649a87\") " pod="openshift-marketplace/community-operators-44tfc" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.059947 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31e6e2f7-8e2a-4013-89c2-f6520a649a87-utilities\") pod \"community-operators-44tfc\" (UID: \"31e6e2f7-8e2a-4013-89c2-f6520a649a87\") " pod="openshift-marketplace/community-operators-44tfc" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.060302 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31e6e2f7-8e2a-4013-89c2-f6520a649a87-catalog-content\") pod \"community-operators-44tfc\" (UID: \"31e6e2f7-8e2a-4013-89c2-f6520a649a87\") " pod="openshift-marketplace/community-operators-44tfc" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.086151 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6plcz\" (UniqueName: \"kubernetes.io/projected/31e6e2f7-8e2a-4013-89c2-f6520a649a87-kube-api-access-6plcz\") pod \"community-operators-44tfc\" (UID: 
\"31e6e2f7-8e2a-4013-89c2-f6520a649a87\") " pod="openshift-marketplace/community-operators-44tfc" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.120400 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tjbbr"] Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.122567 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjbbr" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.125424 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.131853 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjbbr"] Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.160442 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c23d9c-138c-4f2d-8e1b-10bf199f3c65-utilities\") pod \"certified-operators-tjbbr\" (UID: \"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65\") " pod="openshift-marketplace/certified-operators-tjbbr" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.160568 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5g4b\" (UniqueName: \"kubernetes.io/projected/b1c23d9c-138c-4f2d-8e1b-10bf199f3c65-kube-api-access-w5g4b\") pod \"certified-operators-tjbbr\" (UID: \"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65\") " pod="openshift-marketplace/certified-operators-tjbbr" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.160620 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c23d9c-138c-4f2d-8e1b-10bf199f3c65-catalog-content\") pod \"certified-operators-tjbbr\" (UID: 
\"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65\") " pod="openshift-marketplace/certified-operators-tjbbr" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.252530 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-44tfc" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.261377 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c23d9c-138c-4f2d-8e1b-10bf199f3c65-catalog-content\") pod \"certified-operators-tjbbr\" (UID: \"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65\") " pod="openshift-marketplace/certified-operators-tjbbr" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.261671 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c23d9c-138c-4f2d-8e1b-10bf199f3c65-utilities\") pod \"certified-operators-tjbbr\" (UID: \"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65\") " pod="openshift-marketplace/certified-operators-tjbbr" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.261881 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5g4b\" (UniqueName: \"kubernetes.io/projected/b1c23d9c-138c-4f2d-8e1b-10bf199f3c65-kube-api-access-w5g4b\") pod \"certified-operators-tjbbr\" (UID: \"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65\") " pod="openshift-marketplace/certified-operators-tjbbr" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.261895 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c23d9c-138c-4f2d-8e1b-10bf199f3c65-catalog-content\") pod \"certified-operators-tjbbr\" (UID: \"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65\") " pod="openshift-marketplace/certified-operators-tjbbr" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.262300 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c23d9c-138c-4f2d-8e1b-10bf199f3c65-utilities\") pod \"certified-operators-tjbbr\" (UID: \"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65\") " pod="openshift-marketplace/certified-operators-tjbbr" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.287378 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5g4b\" (UniqueName: \"kubernetes.io/projected/b1c23d9c-138c-4f2d-8e1b-10bf199f3c65-kube-api-access-w5g4b\") pod \"certified-operators-tjbbr\" (UID: \"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65\") " pod="openshift-marketplace/certified-operators-tjbbr" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.448126 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjbbr" Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.520413 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-44tfc"] Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.666572 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjbbr"] Feb 24 00:10:41 crc kubenswrapper[4756]: I0224 00:10:41.669500 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44tfc" event={"ID":"31e6e2f7-8e2a-4013-89c2-f6520a649a87","Type":"ContainerStarted","Data":"24c1a6ae822ba801328d1c66d7baedfb56d99a6789399fe64bce69b2774a7cdf"} Feb 24 00:10:41 crc kubenswrapper[4756]: W0224 00:10:41.672730 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1c23d9c_138c_4f2d_8e1b_10bf199f3c65.slice/crio-05e03ee28d8a84404ba9a2173b08d03b87fdc717c554d69920a4a871196a7942 WatchSource:0}: Error finding container 05e03ee28d8a84404ba9a2173b08d03b87fdc717c554d69920a4a871196a7942: Status 404 returned error can't find the container with id 
05e03ee28d8a84404ba9a2173b08d03b87fdc717c554d69920a4a871196a7942 Feb 24 00:10:42 crc kubenswrapper[4756]: I0224 00:10:42.679565 4756 generic.go:334] "Generic (PLEG): container finished" podID="31e6e2f7-8e2a-4013-89c2-f6520a649a87" containerID="e6c80fde79f03c62ac4e393a8652fd7c7ddc39a9b5817103c1132d10bb76957d" exitCode=0 Feb 24 00:10:42 crc kubenswrapper[4756]: I0224 00:10:42.679673 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44tfc" event={"ID":"31e6e2f7-8e2a-4013-89c2-f6520a649a87","Type":"ContainerDied","Data":"e6c80fde79f03c62ac4e393a8652fd7c7ddc39a9b5817103c1132d10bb76957d"} Feb 24 00:10:42 crc kubenswrapper[4756]: I0224 00:10:42.684390 4756 generic.go:334] "Generic (PLEG): container finished" podID="b1c23d9c-138c-4f2d-8e1b-10bf199f3c65" containerID="e23e938f9a73a624aec062ee034e6bfec94a9b76d644edf7f30cdc2c7146e0ea" exitCode=0 Feb 24 00:10:42 crc kubenswrapper[4756]: I0224 00:10:42.684446 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjbbr" event={"ID":"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65","Type":"ContainerDied","Data":"e23e938f9a73a624aec062ee034e6bfec94a9b76d644edf7f30cdc2c7146e0ea"} Feb 24 00:10:42 crc kubenswrapper[4756]: I0224 00:10:42.684483 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjbbr" event={"ID":"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65","Type":"ContainerStarted","Data":"05e03ee28d8a84404ba9a2173b08d03b87fdc717c554d69920a4a871196a7942"} Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.319783 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dz9sx"] Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.322604 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.332727 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.337376 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dz9sx"] Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.494706 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-utilities\") pod \"redhat-marketplace-dz9sx\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") " pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.495207 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-catalog-content\") pod \"redhat-marketplace-dz9sx\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") " pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.495290 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t8c2\" (UniqueName: \"kubernetes.io/projected/7b06bc00-ef2e-451a-b07d-da301f20df31-kube-api-access-4t8c2\") pod \"redhat-marketplace-dz9sx\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") " pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.531205 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ddpsx"] Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.543692 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ddpsx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.549811 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.580408 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddpsx"] Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.598365 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-utilities\") pod \"redhat-marketplace-dz9sx\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") " pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.598530 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-catalog-content\") pod \"redhat-marketplace-dz9sx\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") " pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.598610 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t8c2\" (UniqueName: \"kubernetes.io/projected/7b06bc00-ef2e-451a-b07d-da301f20df31-kube-api-access-4t8c2\") pod \"redhat-marketplace-dz9sx\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") " pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.599278 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-utilities\") pod \"redhat-marketplace-dz9sx\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") " pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 
24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.599670 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-catalog-content\") pod \"redhat-marketplace-dz9sx\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") " pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.603631 4756 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.640132 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t8c2\" (UniqueName: \"kubernetes.io/projected/7b06bc00-ef2e-451a-b07d-da301f20df31-kube-api-access-4t8c2\") pod \"redhat-marketplace-dz9sx\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") " pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.649206 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.697441 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjbbr" event={"ID":"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65","Type":"ContainerStarted","Data":"4c356e360f084b2ff2a0fca27624582a4c8c2fda22d9e655cf9e5b3f724a6392"} Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.699843 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf07722d-ecdf-4f68-8074-fac31ce286a5-catalog-content\") pod \"redhat-operators-ddpsx\" (UID: \"bf07722d-ecdf-4f68-8074-fac31ce286a5\") " pod="openshift-marketplace/redhat-operators-ddpsx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.699972 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf07722d-ecdf-4f68-8074-fac31ce286a5-utilities\") pod \"redhat-operators-ddpsx\" (UID: \"bf07722d-ecdf-4f68-8074-fac31ce286a5\") " pod="openshift-marketplace/redhat-operators-ddpsx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.700019 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p4d8\" (UniqueName: \"kubernetes.io/projected/bf07722d-ecdf-4f68-8074-fac31ce286a5-kube-api-access-7p4d8\") pod \"redhat-operators-ddpsx\" (UID: \"bf07722d-ecdf-4f68-8074-fac31ce286a5\") " pod="openshift-marketplace/redhat-operators-ddpsx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.702255 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44tfc" event={"ID":"31e6e2f7-8e2a-4013-89c2-f6520a649a87","Type":"ContainerStarted","Data":"81198ccf5ba33f51b44f445942792fac1ae2e947f22a093db203e498f68f8bd5"} Feb 24 00:10:43 crc kubenswrapper[4756]: 
I0224 00:10:43.801883 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf07722d-ecdf-4f68-8074-fac31ce286a5-utilities\") pod \"redhat-operators-ddpsx\" (UID: \"bf07722d-ecdf-4f68-8074-fac31ce286a5\") " pod="openshift-marketplace/redhat-operators-ddpsx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.802328 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p4d8\" (UniqueName: \"kubernetes.io/projected/bf07722d-ecdf-4f68-8074-fac31ce286a5-kube-api-access-7p4d8\") pod \"redhat-operators-ddpsx\" (UID: \"bf07722d-ecdf-4f68-8074-fac31ce286a5\") " pod="openshift-marketplace/redhat-operators-ddpsx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.802393 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf07722d-ecdf-4f68-8074-fac31ce286a5-catalog-content\") pod \"redhat-operators-ddpsx\" (UID: \"bf07722d-ecdf-4f68-8074-fac31ce286a5\") " pod="openshift-marketplace/redhat-operators-ddpsx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.802688 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf07722d-ecdf-4f68-8074-fac31ce286a5-utilities\") pod \"redhat-operators-ddpsx\" (UID: \"bf07722d-ecdf-4f68-8074-fac31ce286a5\") " pod="openshift-marketplace/redhat-operators-ddpsx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.803572 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf07722d-ecdf-4f68-8074-fac31ce286a5-catalog-content\") pod \"redhat-operators-ddpsx\" (UID: \"bf07722d-ecdf-4f68-8074-fac31ce286a5\") " pod="openshift-marketplace/redhat-operators-ddpsx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.822443 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7p4d8\" (UniqueName: \"kubernetes.io/projected/bf07722d-ecdf-4f68-8074-fac31ce286a5-kube-api-access-7p4d8\") pod \"redhat-operators-ddpsx\" (UID: \"bf07722d-ecdf-4f68-8074-fac31ce286a5\") " pod="openshift-marketplace/redhat-operators-ddpsx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.910537 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.915214 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ddpsx" Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.943404 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b57df8f7-vrtxl"] Feb 24 00:10:43 crc kubenswrapper[4756]: I0224 00:10:43.943634 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" podUID="bab5942a-cef0-47a2-bf68-dca1c8ac14fa" containerName="controller-manager" containerID="cri-o://72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807" gracePeriod=30 Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.032844 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg"] Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.033386 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" podUID="2a55092a-0ec5-4dfe-b3d8-f636dd648ff1" containerName="route-controller-manager" containerID="cri-o://af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4" gracePeriod=30 Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.100758 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-dz9sx"] Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.239001 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ddpsx"] Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.485914 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.500782 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.634967 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vthd\" (UniqueName: \"kubernetes.io/projected/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-kube-api-access-9vthd\") pod \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.635013 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-config\") pod \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.635078 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dfc8\" (UniqueName: \"kubernetes.io/projected/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-kube-api-access-7dfc8\") pod \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.635135 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-serving-cert\") pod 
\"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.635165 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-client-ca\") pod \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.635211 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-client-ca\") pod \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.635277 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-config\") pod \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.635308 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-proxy-ca-bundles\") pod \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\" (UID: \"bab5942a-cef0-47a2-bf68-dca1c8ac14fa\") " Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.635334 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-serving-cert\") pod \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\" (UID: \"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1\") " Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.636617 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-client-ca" (OuterVolumeSpecName: "client-ca") pod "bab5942a-cef0-47a2-bf68-dca1c8ac14fa" (UID: "bab5942a-cef0-47a2-bf68-dca1c8ac14fa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.636642 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bab5942a-cef0-47a2-bf68-dca1c8ac14fa" (UID: "bab5942a-cef0-47a2-bf68-dca1c8ac14fa"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.636763 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-config" (OuterVolumeSpecName: "config") pod "bab5942a-cef0-47a2-bf68-dca1c8ac14fa" (UID: "bab5942a-cef0-47a2-bf68-dca1c8ac14fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.637038 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-client-ca" (OuterVolumeSpecName: "client-ca") pod "2a55092a-0ec5-4dfe-b3d8-f636dd648ff1" (UID: "2a55092a-0ec5-4dfe-b3d8-f636dd648ff1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.637128 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-config" (OuterVolumeSpecName: "config") pod "2a55092a-0ec5-4dfe-b3d8-f636dd648ff1" (UID: "2a55092a-0ec5-4dfe-b3d8-f636dd648ff1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.641713 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-kube-api-access-9vthd" (OuterVolumeSpecName: "kube-api-access-9vthd") pod "bab5942a-cef0-47a2-bf68-dca1c8ac14fa" (UID: "bab5942a-cef0-47a2-bf68-dca1c8ac14fa"). InnerVolumeSpecName "kube-api-access-9vthd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.642105 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bab5942a-cef0-47a2-bf68-dca1c8ac14fa" (UID: "bab5942a-cef0-47a2-bf68-dca1c8ac14fa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.642242 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2a55092a-0ec5-4dfe-b3d8-f636dd648ff1" (UID: "2a55092a-0ec5-4dfe-b3d8-f636dd648ff1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.644076 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-kube-api-access-7dfc8" (OuterVolumeSpecName: "kube-api-access-7dfc8") pod "2a55092a-0ec5-4dfe-b3d8-f636dd648ff1" (UID: "2a55092a-0ec5-4dfe-b3d8-f636dd648ff1"). InnerVolumeSpecName "kube-api-access-7dfc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.709952 4756 generic.go:334] "Generic (PLEG): container finished" podID="2a55092a-0ec5-4dfe-b3d8-f636dd648ff1" containerID="af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4" exitCode=0 Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.710036 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" event={"ID":"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1","Type":"ContainerDied","Data":"af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4"} Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.710108 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" event={"ID":"2a55092a-0ec5-4dfe-b3d8-f636dd648ff1","Type":"ContainerDied","Data":"500b971cd179cdc596d0923b788b3917c270e6abb99043615aa747cca39cdfaa"} Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.710129 4756 scope.go:117] "RemoveContainer" containerID="af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.710014 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg" Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.713906 4756 generic.go:334] "Generic (PLEG): container finished" podID="bf07722d-ecdf-4f68-8074-fac31ce286a5" containerID="7c89cfa8477d5041321bbaf638e18ff42b78e44d84495d95d1a14d3065ce6d1e" exitCode=0 Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.714200 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpsx" event={"ID":"bf07722d-ecdf-4f68-8074-fac31ce286a5","Type":"ContainerDied","Data":"7c89cfa8477d5041321bbaf638e18ff42b78e44d84495d95d1a14d3065ce6d1e"} Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.714234 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpsx" event={"ID":"bf07722d-ecdf-4f68-8074-fac31ce286a5","Type":"ContainerStarted","Data":"45c98db6adfda9a87bf3578a9310b4718e2712b5ad720b244b8601c39f16be09"} Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.715668 4756 generic.go:334] "Generic (PLEG): container finished" podID="7b06bc00-ef2e-451a-b07d-da301f20df31" containerID="5dcec6047159bcb2730a2db83ec4a45824935af2820ab51c10ecb660f5cc9f67" exitCode=0 Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.715725 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dz9sx" event={"ID":"7b06bc00-ef2e-451a-b07d-da301f20df31","Type":"ContainerDied","Data":"5dcec6047159bcb2730a2db83ec4a45824935af2820ab51c10ecb660f5cc9f67"} Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.715745 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dz9sx" event={"ID":"7b06bc00-ef2e-451a-b07d-da301f20df31","Type":"ContainerStarted","Data":"aa496286fa903fea56c066fc7c3d26e2a68f90ba69139b4837f85e32b6faeb97"} Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.720516 4756 generic.go:334] "Generic 
(PLEG): container finished" podID="b1c23d9c-138c-4f2d-8e1b-10bf199f3c65" containerID="4c356e360f084b2ff2a0fca27624582a4c8c2fda22d9e655cf9e5b3f724a6392" exitCode=0
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.720587 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjbbr" event={"ID":"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65","Type":"ContainerDied","Data":"4c356e360f084b2ff2a0fca27624582a4c8c2fda22d9e655cf9e5b3f724a6392"}
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.724042 4756 generic.go:334] "Generic (PLEG): container finished" podID="31e6e2f7-8e2a-4013-89c2-f6520a649a87" containerID="81198ccf5ba33f51b44f445942792fac1ae2e947f22a093db203e498f68f8bd5" exitCode=0
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.724180 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44tfc" event={"ID":"31e6e2f7-8e2a-4013-89c2-f6520a649a87","Type":"ContainerDied","Data":"81198ccf5ba33f51b44f445942792fac1ae2e947f22a093db203e498f68f8bd5"}
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.728362 4756 scope.go:117] "RemoveContainer" containerID="af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4"
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.728594 4756 generic.go:334] "Generic (PLEG): container finished" podID="bab5942a-cef0-47a2-bf68-dca1c8ac14fa" containerID="72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807" exitCode=0
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.728642 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl"
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.728661 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" event={"ID":"bab5942a-cef0-47a2-bf68-dca1c8ac14fa","Type":"ContainerDied","Data":"72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807"}
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.728697 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b57df8f7-vrtxl" event={"ID":"bab5942a-cef0-47a2-bf68-dca1c8ac14fa","Type":"ContainerDied","Data":"60b2f316c3ba9d650b8c6ca0b4a774df42197de177567dd4eb0c1eb768e232bd"}
Feb 24 00:10:44 crc kubenswrapper[4756]: E0224 00:10:44.731124 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4\": container with ID starting with af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4 not found: ID does not exist" containerID="af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4"
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.731172 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4"} err="failed to get container status \"af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4\": rpc error: code = NotFound desc = could not find container \"af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4\": container with ID starting with af671af9479a61d38736197c4a70e57ff32efc6550df0d6219b59421150496f4 not found: ID does not exist"
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.731205 4756 scope.go:117] "RemoveContainer" containerID="72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807"
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.741416 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.741470 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vthd\" (UniqueName: \"kubernetes.io/projected/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-kube-api-access-9vthd\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.741491 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.741513 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dfc8\" (UniqueName: \"kubernetes.io/projected/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-kube-api-access-7dfc8\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.741527 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.741539 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-client-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.741650 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-client-ca\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.742310 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1-config\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.742345 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bab5942a-cef0-47a2-bf68-dca1c8ac14fa-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.764118 4756 scope.go:117] "RemoveContainer" containerID="72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807"
Feb 24 00:10:44 crc kubenswrapper[4756]: E0224 00:10:44.766227 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807\": container with ID starting with 72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807 not found: ID does not exist" containerID="72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807"
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.766416 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807"} err="failed to get container status \"72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807\": rpc error: code = NotFound desc = could not find container \"72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807\": container with ID starting with 72f861ab3c5c4f99bb63fde0b43fbd676e63307e3dda878522bb578a4c187807 not found: ID does not exist"
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.775756 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg"]
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.779623 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8654ff864-lwgxg"]
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.822794 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b57df8f7-vrtxl"]
Feb 24 00:10:44 crc kubenswrapper[4756]: I0224 00:10:44.826170 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b57df8f7-vrtxl"]
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.134044 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"]
Feb 24 00:10:45 crc kubenswrapper[4756]: E0224 00:10:45.134584 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab5942a-cef0-47a2-bf68-dca1c8ac14fa" containerName="controller-manager"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.134643 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab5942a-cef0-47a2-bf68-dca1c8ac14fa" containerName="controller-manager"
Feb 24 00:10:45 crc kubenswrapper[4756]: E0224 00:10:45.134666 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a55092a-0ec5-4dfe-b3d8-f636dd648ff1" containerName="route-controller-manager"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.134704 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a55092a-0ec5-4dfe-b3d8-f636dd648ff1" containerName="route-controller-manager"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.134872 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a55092a-0ec5-4dfe-b3d8-f636dd648ff1" containerName="route-controller-manager"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.134891 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab5942a-cef0-47a2-bf68-dca1c8ac14fa" containerName="controller-manager"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.135739 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.138773 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"]
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.139611 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.139761 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.139774 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.140401 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.140589 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.140441 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.141578 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.142642 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.147736 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.157624 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.157821 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.157644 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.159969 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.162933 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.167660 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"]
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.176239 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"]
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.250950 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-config\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.251014 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-config\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.251042 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-serving-cert\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.251085 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kngfn\" (UniqueName: \"kubernetes.io/projected/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-kube-api-access-kngfn\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.251128 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-client-ca\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.251169 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-proxy-ca-bundles\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.251202 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-serving-cert\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.251227 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-client-ca\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.251260 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8q44\" (UniqueName: \"kubernetes.io/projected/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-kube-api-access-p8q44\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.352050 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-serving-cert\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.352566 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kngfn\" (UniqueName: \"kubernetes.io/projected/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-kube-api-access-kngfn\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.352612 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-client-ca\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.352656 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-proxy-ca-bundles\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.352700 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-serving-cert\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.352734 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-client-ca\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.352768 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8q44\" (UniqueName: \"kubernetes.io/projected/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-kube-api-access-p8q44\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.352816 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-config\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.352844 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-config\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.354028 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-client-ca\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.354114 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-proxy-ca-bundles\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.354423 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-client-ca\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.354478 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-config\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.356039 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-config\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.364159 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-serving-cert\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.367999 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-serving-cert\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.378250 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8q44\" (UniqueName: \"kubernetes.io/projected/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-kube-api-access-p8q44\") pod \"route-controller-manager-7968c8c54-zsgvw\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.386433 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kngfn\" (UniqueName: \"kubernetes.io/projected/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-kube-api-access-kngfn\") pod \"controller-manager-869cb74f4b-l5ldw\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.521234 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.531566 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.746322 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dz9sx" event={"ID":"7b06bc00-ef2e-451a-b07d-da301f20df31","Type":"ContainerStarted","Data":"8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e"}
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.749650 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjbbr" event={"ID":"b1c23d9c-138c-4f2d-8e1b-10bf199f3c65","Type":"ContainerStarted","Data":"56619daf927b7973e5e0eef999a84942aa6b5b4b0eb434dca1d4348e8d592c18"}
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.756473 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-44tfc" event={"ID":"31e6e2f7-8e2a-4013-89c2-f6520a649a87","Type":"ContainerStarted","Data":"ec3eee7863bf3f7758f4f764eca43de514f0c68d0a0be85163ef3c327cb38eb9"}
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.804968 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tjbbr" podStartSLOduration=2.2810193659999998 podStartE2EDuration="4.804948666s" podCreationTimestamp="2026-02-24 00:10:41 +0000 UTC" firstStartedPulling="2026-02-24 00:10:42.686219353 +0000 UTC m=+299.597081986" lastFinishedPulling="2026-02-24 00:10:45.210148643 +0000 UTC m=+302.121011286" observedRunningTime="2026-02-24 00:10:45.802023551 +0000 UTC m=+302.712886204" watchObservedRunningTime="2026-02-24 00:10:45.804948666 +0000 UTC m=+302.715811309"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.806047 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"]
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.832222 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-44tfc" podStartSLOduration=3.148902416 podStartE2EDuration="5.832195647s" podCreationTimestamp="2026-02-24 00:10:40 +0000 UTC" firstStartedPulling="2026-02-24 00:10:42.682021387 +0000 UTC m=+299.592884060" lastFinishedPulling="2026-02-24 00:10:45.365314658 +0000 UTC m=+302.276177291" observedRunningTime="2026-02-24 00:10:45.82838305 +0000 UTC m=+302.739245693" watchObservedRunningTime="2026-02-24 00:10:45.832195647 +0000 UTC m=+302.743058280"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.878035 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a55092a-0ec5-4dfe-b3d8-f636dd648ff1" path="/var/lib/kubelet/pods/2a55092a-0ec5-4dfe-b3d8-f636dd648ff1/volumes"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.879716 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab5942a-cef0-47a2-bf68-dca1c8ac14fa" path="/var/lib/kubelet/pods/bab5942a-cef0-47a2-bf68-dca1c8ac14fa/volumes"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.908804 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z6g4k"]
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.910521 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.935010 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6g4k"]
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.945532 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"]
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.979729 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a552cc6-869f-4b5c-a95a-25892b560fa4-utilities\") pod \"certified-operators-z6g4k\" (UID: \"4a552cc6-869f-4b5c-a95a-25892b560fa4\") " pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.979806 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a552cc6-869f-4b5c-a95a-25892b560fa4-catalog-content\") pod \"certified-operators-z6g4k\" (UID: \"4a552cc6-869f-4b5c-a95a-25892b560fa4\") " pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:45 crc kubenswrapper[4756]: I0224 00:10:45.980087 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2tz\" (UniqueName: \"kubernetes.io/projected/4a552cc6-869f-4b5c-a95a-25892b560fa4-kube-api-access-2c2tz\") pod \"certified-operators-z6g4k\" (UID: \"4a552cc6-869f-4b5c-a95a-25892b560fa4\") " pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.024833 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"]
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.081486 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c2tz\" (UniqueName: \"kubernetes.io/projected/4a552cc6-869f-4b5c-a95a-25892b560fa4-kube-api-access-2c2tz\") pod \"certified-operators-z6g4k\" (UID: \"4a552cc6-869f-4b5c-a95a-25892b560fa4\") " pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.081588 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a552cc6-869f-4b5c-a95a-25892b560fa4-utilities\") pod \"certified-operators-z6g4k\" (UID: \"4a552cc6-869f-4b5c-a95a-25892b560fa4\") " pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.081616 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a552cc6-869f-4b5c-a95a-25892b560fa4-catalog-content\") pod \"certified-operators-z6g4k\" (UID: \"4a552cc6-869f-4b5c-a95a-25892b560fa4\") " pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.082537 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4a552cc6-869f-4b5c-a95a-25892b560fa4-catalog-content\") pod \"certified-operators-z6g4k\" (UID: \"4a552cc6-869f-4b5c-a95a-25892b560fa4\") " pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.083500 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4a552cc6-869f-4b5c-a95a-25892b560fa4-utilities\") pod \"certified-operators-z6g4k\" (UID: \"4a552cc6-869f-4b5c-a95a-25892b560fa4\") " pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.086888 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"]
Feb 24 00:10:46 crc kubenswrapper[4756]: W0224 00:10:46.094253 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05004c3c_b1c5_418a_8a50_f4ec5e15cf3f.slice/crio-1c1f3d551b29dc2b676b80e9255ca4e0851240441cbb494444e9e6221ecbf7d2 WatchSource:0}: Error finding container 1c1f3d551b29dc2b676b80e9255ca4e0851240441cbb494444e9e6221ecbf7d2: Status 404 returned error can't find the container with id 1c1f3d551b29dc2b676b80e9255ca4e0851240441cbb494444e9e6221ecbf7d2
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.121870 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fq6rf"]
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.123315 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.127437 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c2tz\" (UniqueName: \"kubernetes.io/projected/4a552cc6-869f-4b5c-a95a-25892b560fa4-kube-api-access-2c2tz\") pod \"certified-operators-z6g4k\" (UID: \"4a552cc6-869f-4b5c-a95a-25892b560fa4\") " pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.144534 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fq6rf"]
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.183787 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/167c0e0e-ba56-4452-aed6-fd2857f9f3c7-catalog-content\") pod \"community-operators-fq6rf\" (UID: \"167c0e0e-ba56-4452-aed6-fd2857f9f3c7\") " pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.183848 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8w56\" (UniqueName: \"kubernetes.io/projected/167c0e0e-ba56-4452-aed6-fd2857f9f3c7-kube-api-access-b8w56\") pod \"community-operators-fq6rf\" (UID: \"167c0e0e-ba56-4452-aed6-fd2857f9f3c7\") " pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.183902 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/167c0e0e-ba56-4452-aed6-fd2857f9f3c7-utilities\") pod \"community-operators-fq6rf\" (UID: \"167c0e0e-ba56-4452-aed6-fd2857f9f3c7\") " pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.249536 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.285778 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/167c0e0e-ba56-4452-aed6-fd2857f9f3c7-catalog-content\") pod \"community-operators-fq6rf\" (UID: \"167c0e0e-ba56-4452-aed6-fd2857f9f3c7\") " pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.285829 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8w56\" (UniqueName: \"kubernetes.io/projected/167c0e0e-ba56-4452-aed6-fd2857f9f3c7-kube-api-access-b8w56\") pod \"community-operators-fq6rf\" (UID: \"167c0e0e-ba56-4452-aed6-fd2857f9f3c7\") " pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.285918 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/167c0e0e-ba56-4452-aed6-fd2857f9f3c7-utilities\") pod \"community-operators-fq6rf\" (UID: \"167c0e0e-ba56-4452-aed6-fd2857f9f3c7\") " pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.286419 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/167c0e0e-ba56-4452-aed6-fd2857f9f3c7-catalog-content\") pod \"community-operators-fq6rf\" (UID: \"167c0e0e-ba56-4452-aed6-fd2857f9f3c7\") " pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.286541 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/167c0e0e-ba56-4452-aed6-fd2857f9f3c7-utilities\") pod \"community-operators-fq6rf\" (UID: \"167c0e0e-ba56-4452-aed6-fd2857f9f3c7\") " pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.305360 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8w56\" (UniqueName: \"kubernetes.io/projected/167c0e0e-ba56-4452-aed6-fd2857f9f3c7-kube-api-access-b8w56\") pod \"community-operators-fq6rf\" (UID: \"167c0e0e-ba56-4452-aed6-fd2857f9f3c7\") " pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.454699 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.598982 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6g4k"]
Feb 24 00:10:46 crc kubenswrapper[4756]: W0224 00:10:46.610876 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a552cc6_869f_4b5c_a95a_25892b560fa4.slice/crio-1a67a5607d9953e3f7f552a35b45bdfd00230c64c83e55141717e31e7e103717 WatchSource:0}: Error finding container 1a67a5607d9953e3f7f552a35b45bdfd00230c64c83e55141717e31e7e103717: Status 404 returned error can't find the container with id 1a67a5607d9953e3f7f552a35b45bdfd00230c64c83e55141717e31e7e103717
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.797267 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpsx" event={"ID":"bf07722d-ecdf-4f68-8074-fac31ce286a5","Type":"ContainerStarted","Data":"d2bfe8427e125bd9b4a638df1c7d6cff3fd602c71620ae8cca1f059040713424"}
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.827075 4756 generic.go:334] "Generic (PLEG): container finished" podID="7b06bc00-ef2e-451a-b07d-da301f20df31" containerID="8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e" exitCode=0
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.827281 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dz9sx" event={"ID":"7b06bc00-ef2e-451a-b07d-da301f20df31","Type":"ContainerDied","Data":"8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e"}
Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.834274 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"
event={"ID":"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f","Type":"ContainerStarted","Data":"f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775"} Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.834323 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw" event={"ID":"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f","Type":"ContainerStarted","Data":"1c1f3d551b29dc2b676b80e9255ca4e0851240441cbb494444e9e6221ecbf7d2"} Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.834439 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw" podUID="05004c3c-b1c5-418a-8a50-f4ec5e15cf3f" containerName="controller-manager" containerID="cri-o://f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775" gracePeriod=30 Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.835123 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw" Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.854813 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6g4k" event={"ID":"4a552cc6-869f-4b5c-a95a-25892b560fa4","Type":"ContainerStarted","Data":"1a67a5607d9953e3f7f552a35b45bdfd00230c64c83e55141717e31e7e103717"} Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.860142 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw" Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.882583 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw" event={"ID":"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b","Type":"ContainerStarted","Data":"6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425"} Feb 24 00:10:46 crc 
kubenswrapper[4756]: I0224 00:10:46.882650 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw" event={"ID":"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b","Type":"ContainerStarted","Data":"02ed0a0a1789ebca80d1a90462a3f8516ca2d56c62cfd447e2f032bdb16af76c"} Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.882814 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw" podUID="4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b" containerName="route-controller-manager" containerID="cri-o://6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425" gracePeriod=30 Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.883313 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw" Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.912280 4756 patch_prober.go:28] interesting pod/route-controller-manager-7968c8c54-zsgvw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": write tcp 10.217.0.2:53752->10.217.0.66:8443: write: connection reset by peer" start-of-body= Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.912457 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw" podUID="4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": write tcp 10.217.0.2:53752->10.217.0.66:8443: write: connection reset by peer" Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.920743 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw" podStartSLOduration=3.920711298 podStartE2EDuration="3.920711298s" podCreationTimestamp="2026-02-24 00:10:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:46.91171659 +0000 UTC m=+303.822579213" watchObservedRunningTime="2026-02-24 00:10:46.920711298 +0000 UTC m=+303.831573931" Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.953333 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fq6rf"] Feb 24 00:10:46 crc kubenswrapper[4756]: I0224 00:10:46.969566 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw" podStartSLOduration=2.969537766 podStartE2EDuration="2.969537766s" podCreationTimestamp="2026-02-24 00:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:46.960185219 +0000 UTC m=+303.871047882" watchObservedRunningTime="2026-02-24 00:10:46.969537766 +0000 UTC m=+303.880400399" Feb 24 00:10:47 crc kubenswrapper[4756]: E0224 00:10:47.004690 4756 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf07722d_ecdf_4f68_8074_fac31ce286a5.slice/crio-conmon-d2bfe8427e125bd9b4a638df1c7d6cff3fd602c71620ae8cca1f059040713424.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf07722d_ecdf_4f68_8074_fac31ce286a5.slice/crio-d2bfe8427e125bd9b4a638df1c7d6cff3fd602c71620ae8cca1f059040713424.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a52130e_ef00_429b_a5fe_9ebaf6c7bb3b.slice/crio-6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a552cc6_869f_4b5c_a95a_25892b560fa4.slice/crio-347a89231ba6a41727126ec5db19bc1cbce78ef77649c1e6fa54be5d8fe8ed3f.scope\": RecentStats: unable to find data in memory cache]" Feb 24 00:10:47 crc kubenswrapper[4756]: W0224 00:10:47.063096 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod167c0e0e_ba56_4452_aed6_fd2857f9f3c7.slice/crio-53ea098d953c55d94a0d1a7b502a18373ddca47695efd022797a5a257cec38f0 WatchSource:0}: Error finding container 53ea098d953c55d94a0d1a7b502a18373ddca47695efd022797a5a257cec38f0: Status 404 returned error can't find the container with id 53ea098d953c55d94a0d1a7b502a18373ddca47695efd022797a5a257cec38f0 Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.300798 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.317396 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.341045 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7758cf9bc8-mdndw"] Feb 24 00:10:47 crc kubenswrapper[4756]: E0224 00:10:47.341380 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05004c3c-b1c5-418a-8a50-f4ec5e15cf3f" containerName="controller-manager" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.341406 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="05004c3c-b1c5-418a-8a50-f4ec5e15cf3f" containerName="controller-manager" Feb 24 00:10:47 crc kubenswrapper[4756]: E0224 00:10:47.341420 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b" containerName="route-controller-manager" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.341429 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b" containerName="route-controller-manager" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.341563 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="05004c3c-b1c5-418a-8a50-f4ec5e15cf3f" containerName="controller-manager" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.341580 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b" containerName="route-controller-manager" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.342130 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.359791 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7758cf9bc8-mdndw"] Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.409443 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-proxy-ca-bundles\") pod \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.409506 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kngfn\" (UniqueName: \"kubernetes.io/projected/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-kube-api-access-kngfn\") pod \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.409555 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-config\") pod \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.409608 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8q44\" (UniqueName: \"kubernetes.io/projected/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-kube-api-access-p8q44\") pod \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.409635 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-serving-cert\") pod 
\"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.409670 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-serving-cert\") pod \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.409708 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-client-ca\") pod \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\" (UID: \"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b\") " Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.409733 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-client-ca\") pod \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.409768 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-config\") pod \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\" (UID: \"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f\") " Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.409952 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c174fd58-b424-4aed-bfbe-c1f86d864fc7-serving-cert\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.409980 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-client-ca\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.410002 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-config\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.410031 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khzr2\" (UniqueName: \"kubernetes.io/projected/c174fd58-b424-4aed-bfbe-c1f86d864fc7-kube-api-access-khzr2\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.410088 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-proxy-ca-bundles\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.411135 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-client-ca" (OuterVolumeSpecName: "client-ca") pod "05004c3c-b1c5-418a-8a50-f4ec5e15cf3f" (UID: "05004c3c-b1c5-418a-8a50-f4ec5e15cf3f"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.411503 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-config" (OuterVolumeSpecName: "config") pod "4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b" (UID: "4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.411550 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "05004c3c-b1c5-418a-8a50-f4ec5e15cf3f" (UID: "05004c3c-b1c5-418a-8a50-f4ec5e15cf3f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.412190 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-client-ca" (OuterVolumeSpecName: "client-ca") pod "4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b" (UID: "4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.411957 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-config" (OuterVolumeSpecName: "config") pod "05004c3c-b1c5-418a-8a50-f4ec5e15cf3f" (UID: "05004c3c-b1c5-418a-8a50-f4ec5e15cf3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.418126 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-kube-api-access-kngfn" (OuterVolumeSpecName: "kube-api-access-kngfn") pod "05004c3c-b1c5-418a-8a50-f4ec5e15cf3f" (UID: "05004c3c-b1c5-418a-8a50-f4ec5e15cf3f"). InnerVolumeSpecName "kube-api-access-kngfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.418821 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-kube-api-access-p8q44" (OuterVolumeSpecName: "kube-api-access-p8q44") pod "4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b" (UID: "4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b"). InnerVolumeSpecName "kube-api-access-p8q44". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.421595 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "05004c3c-b1c5-418a-8a50-f4ec5e15cf3f" (UID: "05004c3c-b1c5-418a-8a50-f4ec5e15cf3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.424463 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b" (UID: "4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511230 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-client-ca\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511713 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-config\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511757 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khzr2\" (UniqueName: \"kubernetes.io/projected/c174fd58-b424-4aed-bfbe-c1f86d864fc7-kube-api-access-khzr2\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511811 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-proxy-ca-bundles\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511853 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c174fd58-b424-4aed-bfbe-c1f86d864fc7-serving-cert\") pod 
\"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511914 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511927 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511938 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511953 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511964 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511978 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kngfn\" (UniqueName: \"kubernetes.io/projected/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-kube-api-access-kngfn\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.511990 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-config\") on node \"crc\" DevicePath \"\"" Feb 
24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.512005 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8q44\" (UniqueName: \"kubernetes.io/projected/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b-kube-api-access-p8q44\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.512017 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.513927 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-client-ca\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.514079 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-config\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.520417 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c174fd58-b424-4aed-bfbe-c1f86d864fc7-serving-cert\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.521817 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-proxy-ca-bundles\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.531841 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khzr2\" (UniqueName: \"kubernetes.io/projected/c174fd58-b424-4aed-bfbe-c1f86d864fc7-kube-api-access-khzr2\") pod \"controller-manager-7758cf9bc8-mdndw\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.765779 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.896722 4756 generic.go:334] "Generic (PLEG): container finished" podID="4a552cc6-869f-4b5c-a95a-25892b560fa4" containerID="347a89231ba6a41727126ec5db19bc1cbce78ef77649c1e6fa54be5d8fe8ed3f" exitCode=0 Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.896793 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6g4k" event={"ID":"4a552cc6-869f-4b5c-a95a-25892b560fa4","Type":"ContainerDied","Data":"347a89231ba6a41727126ec5db19bc1cbce78ef77649c1e6fa54be5d8fe8ed3f"} Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.900115 4756 generic.go:334] "Generic (PLEG): container finished" podID="4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b" containerID="6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425" exitCode=0 Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.900180 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw" 
event={"ID":"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b","Type":"ContainerDied","Data":"6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425"} Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.900202 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw" event={"ID":"4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b","Type":"ContainerDied","Data":"02ed0a0a1789ebca80d1a90462a3f8516ca2d56c62cfd447e2f032bdb16af76c"} Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.900222 4756 scope.go:117] "RemoveContainer" containerID="6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.900343 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.906322 4756 generic.go:334] "Generic (PLEG): container finished" podID="bf07722d-ecdf-4f68-8074-fac31ce286a5" containerID="d2bfe8427e125bd9b4a638df1c7d6cff3fd602c71620ae8cca1f059040713424" exitCode=0 Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.906381 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpsx" event={"ID":"bf07722d-ecdf-4f68-8074-fac31ce286a5","Type":"ContainerDied","Data":"d2bfe8427e125bd9b4a638df1c7d6cff3fd602c71620ae8cca1f059040713424"} Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.909470 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dz9sx" event={"ID":"7b06bc00-ef2e-451a-b07d-da301f20df31","Type":"ContainerStarted","Data":"d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546"} Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.915917 4756 generic.go:334] "Generic (PLEG): container finished" podID="167c0e0e-ba56-4452-aed6-fd2857f9f3c7" 
containerID="9b6e6ee22debbf265f5eae0a6958c4ece67259fa7ee15cc66aa5703f84184771" exitCode=0 Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.916013 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq6rf" event={"ID":"167c0e0e-ba56-4452-aed6-fd2857f9f3c7","Type":"ContainerDied","Data":"9b6e6ee22debbf265f5eae0a6958c4ece67259fa7ee15cc66aa5703f84184771"} Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.916058 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq6rf" event={"ID":"167c0e0e-ba56-4452-aed6-fd2857f9f3c7","Type":"ContainerStarted","Data":"53ea098d953c55d94a0d1a7b502a18373ddca47695efd022797a5a257cec38f0"} Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.921044 4756 generic.go:334] "Generic (PLEG): container finished" podID="05004c3c-b1c5-418a-8a50-f4ec5e15cf3f" containerID="f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775" exitCode=0 Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.921111 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw" event={"ID":"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f","Type":"ContainerDied","Data":"f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775"} Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.921290 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw" event={"ID":"05004c3c-b1c5-418a-8a50-f4ec5e15cf3f","Type":"ContainerDied","Data":"1c1f3d551b29dc2b676b80e9255ca4e0851240441cbb494444e9e6221ecbf7d2"} Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.921362 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-869cb74f4b-l5ldw" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.930124 4756 scope.go:117] "RemoveContainer" containerID="6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425" Feb 24 00:10:47 crc kubenswrapper[4756]: E0224 00:10:47.930856 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425\": container with ID starting with 6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425 not found: ID does not exist" containerID="6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.930891 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425"} err="failed to get container status \"6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425\": rpc error: code = NotFound desc = could not find container \"6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425\": container with ID starting with 6e5b3821da24ee2ba07136f93f67f91a531b3978bbc4943cfc8fe7e98511a425 not found: ID does not exist" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.930914 4756 scope.go:117] "RemoveContainer" containerID="f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.937893 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dz9sx" podStartSLOduration=2.315115615 podStartE2EDuration="4.937871771s" podCreationTimestamp="2026-02-24 00:10:43 +0000 UTC" firstStartedPulling="2026-02-24 00:10:44.717921411 +0000 UTC m=+301.628784054" lastFinishedPulling="2026-02-24 00:10:47.340677577 +0000 UTC m=+304.251540210" 
observedRunningTime="2026-02-24 00:10:47.936003403 +0000 UTC m=+304.846866026" watchObservedRunningTime="2026-02-24 00:10:47.937871771 +0000 UTC m=+304.848734404" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.973318 4756 scope.go:117] "RemoveContainer" containerID="f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775" Feb 24 00:10:47 crc kubenswrapper[4756]: E0224 00:10:47.973865 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775\": container with ID starting with f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775 not found: ID does not exist" containerID="f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.973923 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775"} err="failed to get container status \"f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775\": rpc error: code = NotFound desc = could not find container \"f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775\": container with ID starting with f3e63267f26bac00d52cc6d018ade5bfbfff56610beb30b1c03b4f43fc764775 not found: ID does not exist" Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.988997 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"] Feb 24 00:10:47 crc kubenswrapper[4756]: I0224 00:10:47.996186 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7968c8c54-zsgvw"] Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.025240 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"] Feb 24 
00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.028438 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-869cb74f4b-l5ldw"] Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.231926 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7758cf9bc8-mdndw"] Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.308706 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwc9"] Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.310802 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.320528 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-catalog-content\") pod \"redhat-marketplace-9cwc9\" (UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") " pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.320581 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-utilities\") pod \"redhat-marketplace-9cwc9\" (UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") " pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.320633 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmvwq\" (UniqueName: \"kubernetes.io/projected/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-kube-api-access-nmvwq\") pod \"redhat-marketplace-9cwc9\" (UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") " pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:10:48 crc 
kubenswrapper[4756]: I0224 00:10:48.323936 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwc9"] Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.421730 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-catalog-content\") pod \"redhat-marketplace-9cwc9\" (UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") " pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.422254 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-utilities\") pod \"redhat-marketplace-9cwc9\" (UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") " pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.422318 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmvwq\" (UniqueName: \"kubernetes.io/projected/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-kube-api-access-nmvwq\") pod \"redhat-marketplace-9cwc9\" (UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") " pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.422745 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-catalog-content\") pod \"redhat-marketplace-9cwc9\" (UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") " pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.422821 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-utilities\") pod \"redhat-marketplace-9cwc9\" 
(UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") " pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.445411 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmvwq\" (UniqueName: \"kubernetes.io/projected/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-kube-api-access-nmvwq\") pod \"redhat-marketplace-9cwc9\" (UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") " pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.506826 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zpsd2"] Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.508132 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.517613 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zpsd2"] Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.544632 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd46f43-d695-4e45-9396-78c6f5f64a89-catalog-content\") pod \"redhat-operators-zpsd2\" (UID: \"1bd46f43-d695-4e45-9396-78c6f5f64a89\") " pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.544787 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd46f43-d695-4e45-9396-78c6f5f64a89-utilities\") pod \"redhat-operators-zpsd2\" (UID: \"1bd46f43-d695-4e45-9396-78c6f5f64a89\") " pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.544827 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-srx62\" (UniqueName: \"kubernetes.io/projected/1bd46f43-d695-4e45-9396-78c6f5f64a89-kube-api-access-srx62\") pod \"redhat-operators-zpsd2\" (UID: \"1bd46f43-d695-4e45-9396-78c6f5f64a89\") " pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.643758 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.645931 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd46f43-d695-4e45-9396-78c6f5f64a89-utilities\") pod \"redhat-operators-zpsd2\" (UID: \"1bd46f43-d695-4e45-9396-78c6f5f64a89\") " pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.645979 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srx62\" (UniqueName: \"kubernetes.io/projected/1bd46f43-d695-4e45-9396-78c6f5f64a89-kube-api-access-srx62\") pod \"redhat-operators-zpsd2\" (UID: \"1bd46f43-d695-4e45-9396-78c6f5f64a89\") " pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.646033 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd46f43-d695-4e45-9396-78c6f5f64a89-catalog-content\") pod \"redhat-operators-zpsd2\" (UID: \"1bd46f43-d695-4e45-9396-78c6f5f64a89\") " pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.646624 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd46f43-d695-4e45-9396-78c6f5f64a89-catalog-content\") pod \"redhat-operators-zpsd2\" (UID: \"1bd46f43-d695-4e45-9396-78c6f5f64a89\") " 
pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.646734 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd46f43-d695-4e45-9396-78c6f5f64a89-utilities\") pod \"redhat-operators-zpsd2\" (UID: \"1bd46f43-d695-4e45-9396-78c6f5f64a89\") " pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.673738 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srx62\" (UniqueName: \"kubernetes.io/projected/1bd46f43-d695-4e45-9396-78c6f5f64a89-kube-api-access-srx62\") pod \"redhat-operators-zpsd2\" (UID: \"1bd46f43-d695-4e45-9396-78c6f5f64a89\") " pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.845740 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.944565 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ddpsx" event={"ID":"bf07722d-ecdf-4f68-8074-fac31ce286a5","Type":"ContainerStarted","Data":"b9f6ad4c3c2f4f2d84b38c97df21afc7b195bdd722f5d6f86597e8b9fcd5183a"} Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.949000 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq6rf" event={"ID":"167c0e0e-ba56-4452-aed6-fd2857f9f3c7","Type":"ContainerStarted","Data":"808ebce6025da672cad2702a709da8ba8fef1e48b9f44991fe04ab32ed0d6ac8"} Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.958329 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" event={"ID":"c174fd58-b424-4aed-bfbe-c1f86d864fc7","Type":"ContainerStarted","Data":"ad45a48117929b22f587437fddd5dbe95bb5a7c7d7b17c15df70709a868d6c4d"} Feb 24 
00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.958377 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.958387 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" event={"ID":"c174fd58-b424-4aed-bfbe-c1f86d864fc7","Type":"ContainerStarted","Data":"d8714a3d52bd565e98f8d0144dd64ac5b2841cbcee88a4b38570b55b1f2bfc0a"} Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.967927 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.978770 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ddpsx" podStartSLOduration=2.36031371 podStartE2EDuration="5.978752075s" podCreationTimestamp="2026-02-24 00:10:43 +0000 UTC" firstStartedPulling="2026-02-24 00:10:44.715367126 +0000 UTC m=+301.626229759" lastFinishedPulling="2026-02-24 00:10:48.333805491 +0000 UTC m=+305.244668124" observedRunningTime="2026-02-24 00:10:48.975494993 +0000 UTC m=+305.886357636" watchObservedRunningTime="2026-02-24 00:10:48.978752075 +0000 UTC m=+305.889614708" Feb 24 00:10:48 crc kubenswrapper[4756]: I0224 00:10:48.999559 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" podStartSLOduration=2.999535492 podStartE2EDuration="2.999535492s" podCreationTimestamp="2026-02-24 00:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:48.998330352 +0000 UTC m=+305.909192995" watchObservedRunningTime="2026-02-24 00:10:48.999535492 +0000 UTC m=+305.910398125" Feb 24 00:10:49 crc 
kubenswrapper[4756]: I0224 00:10:49.138548 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwc9"] Feb 24 00:10:49 crc kubenswrapper[4756]: W0224 00:10:49.141718 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14871dd9_6e40_4815_96fb_36f1dbbe2a1b.slice/crio-93c6bad91fa4a64c89b24d39567acf5dd14633c756a72e72b02f1c608115d60f WatchSource:0}: Error finding container 93c6bad91fa4a64c89b24d39567acf5dd14633c756a72e72b02f1c608115d60f: Status 404 returned error can't find the container with id 93c6bad91fa4a64c89b24d39567acf5dd14633c756a72e72b02f1c608115d60f Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.183982 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zpsd2"] Feb 24 00:10:49 crc kubenswrapper[4756]: W0224 00:10:49.194203 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bd46f43_d695_4e45_9396_78c6f5f64a89.slice/crio-328bbde18191e483d967da4838736a74b9a1bc2e5fa51b6ba7fdfacb1aab2690 WatchSource:0}: Error finding container 328bbde18191e483d967da4838736a74b9a1bc2e5fa51b6ba7fdfacb1aab2690: Status 404 returned error can't find the container with id 328bbde18191e483d967da4838736a74b9a1bc2e5fa51b6ba7fdfacb1aab2690 Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.841843 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05004c3c-b1c5-418a-8a50-f4ec5e15cf3f" path="/var/lib/kubelet/pods/05004c3c-b1c5-418a-8a50-f4ec5e15cf3f/volumes" Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.842524 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b" path="/var/lib/kubelet/pods/4a52130e-ef00-429b-a5fe-9ebaf6c7bb3b/volumes" Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.965320 4756 generic.go:334] "Generic 
(PLEG): container finished" podID="4a552cc6-869f-4b5c-a95a-25892b560fa4" containerID="59ef91303e7f74dc930e935b89533534ee7d4cfb504994728c67c6b3273defe2" exitCode=0 Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.965416 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6g4k" event={"ID":"4a552cc6-869f-4b5c-a95a-25892b560fa4","Type":"ContainerDied","Data":"59ef91303e7f74dc930e935b89533534ee7d4cfb504994728c67c6b3273defe2"} Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.968268 4756 generic.go:334] "Generic (PLEG): container finished" podID="1bd46f43-d695-4e45-9396-78c6f5f64a89" containerID="7c4a5c50dcf74506f4456175857d1c16911f0aec5ab0a4df6c6a2ce4e05bf9f7" exitCode=0 Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.968376 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpsd2" event={"ID":"1bd46f43-d695-4e45-9396-78c6f5f64a89","Type":"ContainerDied","Data":"7c4a5c50dcf74506f4456175857d1c16911f0aec5ab0a4df6c6a2ce4e05bf9f7"} Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.968411 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpsd2" event={"ID":"1bd46f43-d695-4e45-9396-78c6f5f64a89","Type":"ContainerStarted","Data":"328bbde18191e483d967da4838736a74b9a1bc2e5fa51b6ba7fdfacb1aab2690"} Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.971513 4756 generic.go:334] "Generic (PLEG): container finished" podID="167c0e0e-ba56-4452-aed6-fd2857f9f3c7" containerID="808ebce6025da672cad2702a709da8ba8fef1e48b9f44991fe04ab32ed0d6ac8" exitCode=0 Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.971579 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq6rf" event={"ID":"167c0e0e-ba56-4452-aed6-fd2857f9f3c7","Type":"ContainerDied","Data":"808ebce6025da672cad2702a709da8ba8fef1e48b9f44991fe04ab32ed0d6ac8"} Feb 24 00:10:49 crc kubenswrapper[4756]: 
I0224 00:10:49.973047 4756 generic.go:334] "Generic (PLEG): container finished" podID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" containerID="51039dc7e40092c4b9d026b9aa179951c5d6c031eb49798af7db5a5fb13ae88e" exitCode=0 Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.973161 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwc9" event={"ID":"14871dd9-6e40-4815-96fb-36f1dbbe2a1b","Type":"ContainerDied","Data":"51039dc7e40092c4b9d026b9aa179951c5d6c031eb49798af7db5a5fb13ae88e"} Feb 24 00:10:49 crc kubenswrapper[4756]: I0224 00:10:49.973194 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwc9" event={"ID":"14871dd9-6e40-4815-96fb-36f1dbbe2a1b","Type":"ContainerStarted","Data":"93c6bad91fa4a64c89b24d39567acf5dd14633c756a72e72b02f1c608115d60f"} Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.138507 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h"] Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.139499 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.142267 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.143284 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.143355 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.144120 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.144162 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.149559 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.155310 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h"] Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.269147 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-config\") pod \"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.269243 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f1e42e-f443-4acd-b244-8daa97958510-serving-cert\") pod \"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.269278 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-client-ca\") pod \"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.269365 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzkng\" (UniqueName: \"kubernetes.io/projected/94f1e42e-f443-4acd-b244-8daa97958510-kube-api-access-dzkng\") pod \"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.370824 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-config\") pod \"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.370899 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f1e42e-f443-4acd-b244-8daa97958510-serving-cert\") pod 
\"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.370924 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-client-ca\") pod \"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.370961 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzkng\" (UniqueName: \"kubernetes.io/projected/94f1e42e-f443-4acd-b244-8daa97958510-kube-api-access-dzkng\") pod \"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.372558 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-config\") pod \"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.373577 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-client-ca\") pod \"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.392825 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f1e42e-f443-4acd-b244-8daa97958510-serving-cert\") pod \"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.393479 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzkng\" (UniqueName: \"kubernetes.io/projected/94f1e42e-f443-4acd-b244-8daa97958510-kube-api-access-dzkng\") pod \"route-controller-manager-5fc945944d-6fx4h\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") " pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.457753 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.717569 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s6swn"] Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.720844 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s6swn"
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.734326 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s6swn"]
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.878600 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c6e373e-56f6-40e1-94b4-d9c4116b0f9f-utilities\") pod \"certified-operators-s6swn\" (UID: \"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f\") " pod="openshift-marketplace/certified-operators-s6swn"
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.878913 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c6e373e-56f6-40e1-94b4-d9c4116b0f9f-catalog-content\") pod \"certified-operators-s6swn\" (UID: \"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f\") " pod="openshift-marketplace/certified-operators-s6swn"
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.879037 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v7ss\" (UniqueName: \"kubernetes.io/projected/6c6e373e-56f6-40e1-94b4-d9c4116b0f9f-kube-api-access-8v7ss\") pod \"certified-operators-s6swn\" (UID: \"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f\") " pod="openshift-marketplace/certified-operators-s6swn"
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.915319 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x2gzm"]
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.917206 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2gzm"
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.959057 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2gzm"]
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.980871 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c6e373e-56f6-40e1-94b4-d9c4116b0f9f-catalog-content\") pod \"certified-operators-s6swn\" (UID: \"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f\") " pod="openshift-marketplace/certified-operators-s6swn"
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.980992 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v7ss\" (UniqueName: \"kubernetes.io/projected/6c6e373e-56f6-40e1-94b4-d9c4116b0f9f-kube-api-access-8v7ss\") pod \"certified-operators-s6swn\" (UID: \"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f\") " pod="openshift-marketplace/certified-operators-s6swn"
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.981042 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c6e373e-56f6-40e1-94b4-d9c4116b0f9f-utilities\") pod \"certified-operators-s6swn\" (UID: \"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f\") " pod="openshift-marketplace/certified-operators-s6swn"
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.981619 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c6e373e-56f6-40e1-94b4-d9c4116b0f9f-utilities\") pod \"certified-operators-s6swn\" (UID: \"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f\") " pod="openshift-marketplace/certified-operators-s6swn"
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.981715 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c6e373e-56f6-40e1-94b4-d9c4116b0f9f-catalog-content\") pod \"certified-operators-s6swn\" (UID: \"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f\") " pod="openshift-marketplace/certified-operators-s6swn"
Feb 24 00:10:50 crc kubenswrapper[4756]: I0224 00:10:50.990664 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpsd2" event={"ID":"1bd46f43-d695-4e45-9396-78c6f5f64a89","Type":"ContainerStarted","Data":"f9e639b2081499d630bf15c4d383b4509f203175f3adb4de2bd68e2a1b17d8be"}
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.002516 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fq6rf" event={"ID":"167c0e0e-ba56-4452-aed6-fd2857f9f3c7","Type":"ContainerStarted","Data":"2c3039a8bb35b75c91052d6ea46e435edc810b04de327259d204ff634db7ea0c"}
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.010221 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v7ss\" (UniqueName: \"kubernetes.io/projected/6c6e373e-56f6-40e1-94b4-d9c4116b0f9f-kube-api-access-8v7ss\") pod \"certified-operators-s6swn\" (UID: \"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f\") " pod="openshift-marketplace/certified-operators-s6swn"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.012159 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwc9" event={"ID":"14871dd9-6e40-4815-96fb-36f1dbbe2a1b","Type":"ContainerStarted","Data":"77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd"}
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.020323 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6g4k" event={"ID":"4a552cc6-869f-4b5c-a95a-25892b560fa4","Type":"ContainerStarted","Data":"38058d73bbb28fb604e9feefa56ff8d98c57030bf406094b895a5d2ad4f10c9f"}
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.026307 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h"]
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.059529 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z6g4k" podStartSLOduration=3.546334731 podStartE2EDuration="6.059489117s" podCreationTimestamp="2026-02-24 00:10:45 +0000 UTC" firstStartedPulling="2026-02-24 00:10:47.899107428 +0000 UTC m=+304.809970061" lastFinishedPulling="2026-02-24 00:10:50.412261824 +0000 UTC m=+307.323124447" observedRunningTime="2026-02-24 00:10:51.053912175 +0000 UTC m=+307.964774808" watchObservedRunningTime="2026-02-24 00:10:51.059489117 +0000 UTC m=+307.970351760"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.062801 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s6swn"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.082180 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23d0e11e-04f3-4913-9b33-be7aeb27232b-utilities\") pod \"community-operators-x2gzm\" (UID: \"23d0e11e-04f3-4913-9b33-be7aeb27232b\") " pod="openshift-marketplace/community-operators-x2gzm"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.082257 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-244jq\" (UniqueName: \"kubernetes.io/projected/23d0e11e-04f3-4913-9b33-be7aeb27232b-kube-api-access-244jq\") pod \"community-operators-x2gzm\" (UID: \"23d0e11e-04f3-4913-9b33-be7aeb27232b\") " pod="openshift-marketplace/community-operators-x2gzm"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.082291 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23d0e11e-04f3-4913-9b33-be7aeb27232b-catalog-content\") pod \"community-operators-x2gzm\" (UID: \"23d0e11e-04f3-4913-9b33-be7aeb27232b\") " pod="openshift-marketplace/community-operators-x2gzm"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.106020 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fq6rf" podStartSLOduration=2.684965045 podStartE2EDuration="5.105999516s" podCreationTimestamp="2026-02-24 00:10:46 +0000 UTC" firstStartedPulling="2026-02-24 00:10:47.930114254 +0000 UTC m=+304.840976887" lastFinishedPulling="2026-02-24 00:10:50.351148725 +0000 UTC m=+307.262011358" observedRunningTime="2026-02-24 00:10:51.104441437 +0000 UTC m=+308.015304090" watchObservedRunningTime="2026-02-24 00:10:51.105999516 +0000 UTC m=+308.016862159"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.185748 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23d0e11e-04f3-4913-9b33-be7aeb27232b-utilities\") pod \"community-operators-x2gzm\" (UID: \"23d0e11e-04f3-4913-9b33-be7aeb27232b\") " pod="openshift-marketplace/community-operators-x2gzm"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.185910 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-244jq\" (UniqueName: \"kubernetes.io/projected/23d0e11e-04f3-4913-9b33-be7aeb27232b-kube-api-access-244jq\") pod \"community-operators-x2gzm\" (UID: \"23d0e11e-04f3-4913-9b33-be7aeb27232b\") " pod="openshift-marketplace/community-operators-x2gzm"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.185966 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23d0e11e-04f3-4913-9b33-be7aeb27232b-catalog-content\") pod \"community-operators-x2gzm\" (UID: \"23d0e11e-04f3-4913-9b33-be7aeb27232b\") " pod="openshift-marketplace/community-operators-x2gzm"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.188466 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23d0e11e-04f3-4913-9b33-be7aeb27232b-utilities\") pod \"community-operators-x2gzm\" (UID: \"23d0e11e-04f3-4913-9b33-be7aeb27232b\") " pod="openshift-marketplace/community-operators-x2gzm"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.188740 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23d0e11e-04f3-4913-9b33-be7aeb27232b-catalog-content\") pod \"community-operators-x2gzm\" (UID: \"23d0e11e-04f3-4913-9b33-be7aeb27232b\") " pod="openshift-marketplace/community-operators-x2gzm"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.223963 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-244jq\" (UniqueName: \"kubernetes.io/projected/23d0e11e-04f3-4913-9b33-be7aeb27232b-kube-api-access-244jq\") pod \"community-operators-x2gzm\" (UID: \"23d0e11e-04f3-4913-9b33-be7aeb27232b\") " pod="openshift-marketplace/community-operators-x2gzm"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.235759 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2gzm"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.253470 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-44tfc"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.254783 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-44tfc"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.311175 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-44tfc"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.449892 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tjbbr"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.449960 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tjbbr"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.511606 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tjbbr"
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.541928 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s6swn"]
Feb 24 00:10:51 crc kubenswrapper[4756]: W0224 00:10:51.554358 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c6e373e_56f6_40e1_94b4_d9c4116b0f9f.slice/crio-ad114d433ffef4ba4ff65008041544ea2bc8ecb0f958b3e4a68bec62ad261d56 WatchSource:0}: Error finding container ad114d433ffef4ba4ff65008041544ea2bc8ecb0f958b3e4a68bec62ad261d56: Status 404 returned error can't find the container with id ad114d433ffef4ba4ff65008041544ea2bc8ecb0f958b3e4a68bec62ad261d56
Feb 24 00:10:51 crc kubenswrapper[4756]: I0224 00:10:51.722534 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2gzm"]
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.028637 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" event={"ID":"94f1e42e-f443-4acd-b244-8daa97958510","Type":"ContainerStarted","Data":"95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2"}
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.028733 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h"
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.029153 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" event={"ID":"94f1e42e-f443-4acd-b244-8daa97958510","Type":"ContainerStarted","Data":"acd171196c1962bd71e1c8ca3f77401cb66b8d4ac1cff1ad8d1feb928c26614d"}
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.030656 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2gzm" event={"ID":"23d0e11e-04f3-4913-9b33-be7aeb27232b","Type":"ContainerStarted","Data":"5504efd947f2c1823a6ae4ef780e9c29029a7140b6ef853a55378726743132ae"}
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.030724 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2gzm" event={"ID":"23d0e11e-04f3-4913-9b33-be7aeb27232b","Type":"ContainerStarted","Data":"47bb1eb63f928a5adc8bf02e90b5f658de9b67f8cb4229570252709c23007654"}
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.032569 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6swn" event={"ID":"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f","Type":"ContainerStarted","Data":"8036e9fad971308a6c8df29c32984701f28bf52503a0d3e66a417ae211821697"}
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.032622 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6swn" event={"ID":"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f","Type":"ContainerStarted","Data":"ad114d433ffef4ba4ff65008041544ea2bc8ecb0f958b3e4a68bec62ad261d56"}
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.034980 4756 generic.go:334] "Generic (PLEG): container finished" podID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" containerID="77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd" exitCode=0
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.035161 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwc9" event={"ID":"14871dd9-6e40-4815-96fb-36f1dbbe2a1b","Type":"ContainerDied","Data":"77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd"}
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.036830 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h"
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.056957 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" podStartSLOduration=6.056930209 podStartE2EDuration="6.056930209s" podCreationTimestamp="2026-02-24 00:10:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:10:52.054811786 +0000 UTC m=+308.965674419" watchObservedRunningTime="2026-02-24 00:10:52.056930209 +0000 UTC m=+308.967792842"
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.110483 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tjbbr"
Feb 24 00:10:52 crc kubenswrapper[4756]: I0224 00:10:52.112337 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-44tfc"
Feb 24 00:10:53 crc kubenswrapper[4756]: I0224 00:10:53.044960 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwc9" event={"ID":"14871dd9-6e40-4815-96fb-36f1dbbe2a1b","Type":"ContainerStarted","Data":"ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c"}
Feb 24 00:10:53 crc kubenswrapper[4756]: I0224 00:10:53.080288 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9cwc9" podStartSLOduration=2.492227703 podStartE2EDuration="5.080265469s" podCreationTimestamp="2026-02-24 00:10:48 +0000 UTC" firstStartedPulling="2026-02-24 00:10:49.974157656 +0000 UTC m=+306.885020289" lastFinishedPulling="2026-02-24 00:10:52.562195422 +0000 UTC m=+309.473058055" observedRunningTime="2026-02-24 00:10:53.076228477 +0000 UTC m=+309.987091110" watchObservedRunningTime="2026-02-24 00:10:53.080265469 +0000 UTC m=+309.991128102"
Feb 24 00:10:53 crc kubenswrapper[4756]: I0224 00:10:53.650051 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dz9sx"
Feb 24 00:10:53 crc kubenswrapper[4756]: I0224 00:10:53.650112 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dz9sx"
Feb 24 00:10:53 crc kubenswrapper[4756]: I0224 00:10:53.687476 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dz9sx"
Feb 24 00:10:53 crc kubenswrapper[4756]: I0224 00:10:53.915968 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ddpsx"
Feb 24 00:10:53 crc kubenswrapper[4756]: I0224 00:10:53.916147 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ddpsx"
Feb 24 00:10:54 crc kubenswrapper[4756]: I0224 00:10:54.052610 4756 generic.go:334] "Generic (PLEG): container finished" podID="6c6e373e-56f6-40e1-94b4-d9c4116b0f9f" containerID="8036e9fad971308a6c8df29c32984701f28bf52503a0d3e66a417ae211821697" exitCode=0
Feb 24 00:10:54 crc kubenswrapper[4756]: I0224 00:10:54.052685 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6swn" event={"ID":"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f","Type":"ContainerDied","Data":"8036e9fad971308a6c8df29c32984701f28bf52503a0d3e66a417ae211821697"}
Feb 24 00:10:54 crc kubenswrapper[4756]: I0224 00:10:54.056636 4756 generic.go:334] "Generic (PLEG): container finished" podID="1bd46f43-d695-4e45-9396-78c6f5f64a89" containerID="f9e639b2081499d630bf15c4d383b4509f203175f3adb4de2bd68e2a1b17d8be" exitCode=0
Feb 24 00:10:54 crc kubenswrapper[4756]: I0224 00:10:54.056706 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpsd2" event={"ID":"1bd46f43-d695-4e45-9396-78c6f5f64a89","Type":"ContainerDied","Data":"f9e639b2081499d630bf15c4d383b4509f203175f3adb4de2bd68e2a1b17d8be"}
Feb 24 00:10:54 crc kubenswrapper[4756]: I0224 00:10:54.058399 4756 generic.go:334] "Generic (PLEG): container finished" podID="23d0e11e-04f3-4913-9b33-be7aeb27232b" containerID="5504efd947f2c1823a6ae4ef780e9c29029a7140b6ef853a55378726743132ae" exitCode=0
Feb 24 00:10:54 crc kubenswrapper[4756]: I0224 00:10:54.058511 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2gzm" event={"ID":"23d0e11e-04f3-4913-9b33-be7aeb27232b","Type":"ContainerDied","Data":"5504efd947f2c1823a6ae4ef780e9c29029a7140b6ef853a55378726743132ae"}
Feb 24 00:10:54 crc kubenswrapper[4756]: I0224 00:10:54.112322 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dz9sx"
Feb 24 00:10:54 crc kubenswrapper[4756]: I0224 00:10:54.955902 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ddpsx" podUID="bf07722d-ecdf-4f68-8074-fac31ce286a5" containerName="registry-server" probeResult="failure" output=<
Feb 24 00:10:54 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s
Feb 24 00:10:54 crc kubenswrapper[4756]: >
Feb 24 00:10:56 crc kubenswrapper[4756]: I0224 00:10:56.070572 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zpsd2" event={"ID":"1bd46f43-d695-4e45-9396-78c6f5f64a89","Type":"ContainerStarted","Data":"2e25b0d695a827225d0f118491d145acb516cfdb4d5bb84c7cc630223b15f209"}
Feb 24 00:10:56 crc kubenswrapper[4756]: I0224 00:10:56.073397 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2gzm" event={"ID":"23d0e11e-04f3-4913-9b33-be7aeb27232b","Type":"ContainerStarted","Data":"836ed859161f80e1f728e32b9e8e91e50feb82fc730cdf2307120bd3c07af1fd"}
Feb 24 00:10:56 crc kubenswrapper[4756]: I0224 00:10:56.096134 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zpsd2" podStartSLOduration=3.018854337 podStartE2EDuration="8.096105262s" podCreationTimestamp="2026-02-24 00:10:48 +0000 UTC" firstStartedPulling="2026-02-24 00:10:49.969800676 +0000 UTC m=+306.880663309" lastFinishedPulling="2026-02-24 00:10:55.047051601 +0000 UTC m=+311.957914234" observedRunningTime="2026-02-24 00:10:56.092408929 +0000 UTC m=+313.003271582" watchObservedRunningTime="2026-02-24 00:10:56.096105262 +0000 UTC m=+313.006967895"
Feb 24 00:10:56 crc kubenswrapper[4756]: I0224 00:10:56.250721 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:56 crc kubenswrapper[4756]: I0224 00:10:56.250928 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:56 crc kubenswrapper[4756]: I0224 00:10:56.288287 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:56 crc kubenswrapper[4756]: I0224 00:10:56.455487 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:56 crc kubenswrapper[4756]: I0224 00:10:56.455578 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:56 crc kubenswrapper[4756]: I0224 00:10:56.521184 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:57 crc kubenswrapper[4756]: I0224 00:10:57.081898 4756 generic.go:334] "Generic (PLEG): container finished" podID="23d0e11e-04f3-4913-9b33-be7aeb27232b" containerID="836ed859161f80e1f728e32b9e8e91e50feb82fc730cdf2307120bd3c07af1fd" exitCode=0
Feb 24 00:10:57 crc kubenswrapper[4756]: I0224 00:10:57.081976 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2gzm" event={"ID":"23d0e11e-04f3-4913-9b33-be7aeb27232b","Type":"ContainerDied","Data":"836ed859161f80e1f728e32b9e8e91e50feb82fc730cdf2307120bd3c07af1fd"}
Feb 24 00:10:57 crc kubenswrapper[4756]: I0224 00:10:57.134606 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fq6rf"
Feb 24 00:10:57 crc kubenswrapper[4756]: I0224 00:10:57.155476 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z6g4k"
Feb 24 00:10:58 crc kubenswrapper[4756]: I0224 00:10:58.644080 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9cwc9"
Feb 24 00:10:58 crc kubenswrapper[4756]: I0224 00:10:58.644153 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9cwc9"
Feb 24 00:10:58 crc kubenswrapper[4756]: I0224 00:10:58.719762 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9cwc9"
Feb 24 00:10:58 crc kubenswrapper[4756]: I0224 00:10:58.847150 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zpsd2"
Feb 24 00:10:58 crc kubenswrapper[4756]: I0224 00:10:58.847607 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zpsd2"
Feb 24 00:10:59 crc kubenswrapper[4756]: I0224 00:10:59.152238 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9cwc9"
Feb 24 00:10:59 crc kubenswrapper[4756]: I0224 00:10:59.903583 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zpsd2" podUID="1bd46f43-d695-4e45-9396-78c6f5f64a89" containerName="registry-server" probeResult="failure" output=<
Feb 24 00:10:59 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s
Feb 24 00:10:59 crc kubenswrapper[4756]: >
Feb 24 00:11:03 crc kubenswrapper[4756]: I0224 00:11:03.950403 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7758cf9bc8-mdndw"]
Feb 24 00:11:03 crc kubenswrapper[4756]: I0224 00:11:03.951500 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" podUID="c174fd58-b424-4aed-bfbe-c1f86d864fc7" containerName="controller-manager" containerID="cri-o://ad45a48117929b22f587437fddd5dbe95bb5a7c7d7b17c15df70709a868d6c4d" gracePeriod=30
Feb 24 00:11:03 crc kubenswrapper[4756]: I0224 00:11:03.968544 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h"]
Feb 24 00:11:03 crc kubenswrapper[4756]: I0224 00:11:03.968949 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" podUID="94f1e42e-f443-4acd-b244-8daa97958510" containerName="route-controller-manager" containerID="cri-o://95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2" gracePeriod=30
Feb 24 00:11:03 crc kubenswrapper[4756]: I0224 00:11:03.989988 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ddpsx"
Feb 24 00:11:04 crc kubenswrapper[4756]: I0224 00:11:04.034965 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ddpsx"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.122468 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.131303 4756 generic.go:334] "Generic (PLEG): container finished" podID="c174fd58-b424-4aed-bfbe-c1f86d864fc7" containerID="ad45a48117929b22f587437fddd5dbe95bb5a7c7d7b17c15df70709a868d6c4d" exitCode=0
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.131396 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" event={"ID":"c174fd58-b424-4aed-bfbe-c1f86d864fc7","Type":"ContainerDied","Data":"ad45a48117929b22f587437fddd5dbe95bb5a7c7d7b17c15df70709a868d6c4d"}
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.135551 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2gzm" event={"ID":"23d0e11e-04f3-4913-9b33-be7aeb27232b","Type":"ContainerStarted","Data":"32a9b50ee75d543eecb92bde294df34cdd09bdc31c553de58aa7a5b978f5415f"}
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.138807 4756 generic.go:334] "Generic (PLEG): container finished" podID="6c6e373e-56f6-40e1-94b4-d9c4116b0f9f" containerID="afe1cdc13043a4627804a196c76777bdf3d663ee1a6780993166d033cdbca254" exitCode=0
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.138864 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6swn" event={"ID":"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f","Type":"ContainerDied","Data":"afe1cdc13043a4627804a196c76777bdf3d663ee1a6780993166d033cdbca254"}
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.148468 4756 generic.go:334] "Generic (PLEG): container finished" podID="94f1e42e-f443-4acd-b244-8daa97958510" containerID="95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2" exitCode=0
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.148521 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" event={"ID":"94f1e42e-f443-4acd-b244-8daa97958510","Type":"ContainerDied","Data":"95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2"}
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.148550 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h" event={"ID":"94f1e42e-f443-4acd-b244-8daa97958510","Type":"ContainerDied","Data":"acd171196c1962bd71e1c8ca3f77401cb66b8d4ac1cff1ad8d1feb928c26614d"}
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.148570 4756 scope.go:117] "RemoveContainer" containerID="95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.148756 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.184595 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x2gzm" podStartSLOduration=4.724508202 podStartE2EDuration="15.184559331s" podCreationTimestamp="2026-02-24 00:10:50 +0000 UTC" firstStartedPulling="2026-02-24 00:10:54.06229677 +0000 UTC m=+310.973159403" lastFinishedPulling="2026-02-24 00:11:04.522347869 +0000 UTC m=+321.433210532" observedRunningTime="2026-02-24 00:11:05.182251472 +0000 UTC m=+322.093114105" watchObservedRunningTime="2026-02-24 00:11:05.184559331 +0000 UTC m=+322.095421974"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.189012 4756 scope.go:117] "RemoveContainer" containerID="95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2"
Feb 24 00:11:05 crc kubenswrapper[4756]: E0224 00:11:05.189528 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2\": container with ID starting with 95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2 not found: ID does not exist" containerID="95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.189596 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2"} err="failed to get container status \"95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2\": rpc error: code = NotFound desc = could not find container \"95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2\": container with ID starting with 95f442f0da99356443ff58fa524187f78bbfe959e0107bd0ed65602eca6e07e2 not found: ID does not exist"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.204767 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.207689 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz"]
Feb 24 00:11:05 crc kubenswrapper[4756]: E0224 00:11:05.207949 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c174fd58-b424-4aed-bfbe-c1f86d864fc7" containerName="controller-manager"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.207966 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c174fd58-b424-4aed-bfbe-c1f86d864fc7" containerName="controller-manager"
Feb 24 00:11:05 crc kubenswrapper[4756]: E0224 00:11:05.207992 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f1e42e-f443-4acd-b244-8daa97958510" containerName="route-controller-manager"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.208001 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f1e42e-f443-4acd-b244-8daa97958510" containerName="route-controller-manager"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.208136 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f1e42e-f443-4acd-b244-8daa97958510" containerName="route-controller-manager"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.208155 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c174fd58-b424-4aed-bfbe-c1f86d864fc7" containerName="controller-manager"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.208549 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz"
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.219593 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz"]
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.235618 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f1e42e-f443-4acd-b244-8daa97958510-serving-cert\") pod \"94f1e42e-f443-4acd-b244-8daa97958510\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") "
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.235692 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzkng\" (UniqueName: \"kubernetes.io/projected/94f1e42e-f443-4acd-b244-8daa97958510-kube-api-access-dzkng\") pod \"94f1e42e-f443-4acd-b244-8daa97958510\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") "
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.235726 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-client-ca\") pod \"94f1e42e-f443-4acd-b244-8daa97958510\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") "
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.235747 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-config\") pod \"94f1e42e-f443-4acd-b244-8daa97958510\" (UID: \"94f1e42e-f443-4acd-b244-8daa97958510\") "
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.236844 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-client-ca" (OuterVolumeSpecName: "client-ca") pod "94f1e42e-f443-4acd-b244-8daa97958510" (UID: "94f1e42e-f443-4acd-b244-8daa97958510"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.236873 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-config" (OuterVolumeSpecName: "config") pod "94f1e42e-f443-4acd-b244-8daa97958510" (UID: "94f1e42e-f443-4acd-b244-8daa97958510"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.249522 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f1e42e-f443-4acd-b244-8daa97958510-kube-api-access-dzkng" (OuterVolumeSpecName: "kube-api-access-dzkng") pod "94f1e42e-f443-4acd-b244-8daa97958510" (UID: "94f1e42e-f443-4acd-b244-8daa97958510"). InnerVolumeSpecName "kube-api-access-dzkng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.252178 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f1e42e-f443-4acd-b244-8daa97958510-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "94f1e42e-f443-4acd-b244-8daa97958510" (UID: "94f1e42e-f443-4acd-b244-8daa97958510"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.336645 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-config\") pod \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") "
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337136 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c174fd58-b424-4aed-bfbe-c1f86d864fc7-serving-cert\") pod \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") "
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337214 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khzr2\" (UniqueName: \"kubernetes.io/projected/c174fd58-b424-4aed-bfbe-c1f86d864fc7-kube-api-access-khzr2\") pod \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") "
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337249 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-client-ca\") pod \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") "
Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337303 4756 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-proxy-ca-bundles\") pod \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\" (UID: \"c174fd58-b424-4aed-bfbe-c1f86d864fc7\") " Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337440 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-config\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337505 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-client-ca\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337578 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-serving-cert\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337610 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvk9m\" (UniqueName: \"kubernetes.io/projected/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-kube-api-access-tvk9m\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " 
pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337649 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94f1e42e-f443-4acd-b244-8daa97958510-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337661 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzkng\" (UniqueName: \"kubernetes.io/projected/94f1e42e-f443-4acd-b244-8daa97958510-kube-api-access-dzkng\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337671 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337680 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94f1e42e-f443-4acd-b244-8daa97958510-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337669 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-config" (OuterVolumeSpecName: "config") pod "c174fd58-b424-4aed-bfbe-c1f86d864fc7" (UID: "c174fd58-b424-4aed-bfbe-c1f86d864fc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.337914 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-client-ca" (OuterVolumeSpecName: "client-ca") pod "c174fd58-b424-4aed-bfbe-c1f86d864fc7" (UID: "c174fd58-b424-4aed-bfbe-c1f86d864fc7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.338157 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c174fd58-b424-4aed-bfbe-c1f86d864fc7" (UID: "c174fd58-b424-4aed-bfbe-c1f86d864fc7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.339815 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c174fd58-b424-4aed-bfbe-c1f86d864fc7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c174fd58-b424-4aed-bfbe-c1f86d864fc7" (UID: "c174fd58-b424-4aed-bfbe-c1f86d864fc7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.339988 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c174fd58-b424-4aed-bfbe-c1f86d864fc7-kube-api-access-khzr2" (OuterVolumeSpecName: "kube-api-access-khzr2") pod "c174fd58-b424-4aed-bfbe-c1f86d864fc7" (UID: "c174fd58-b424-4aed-bfbe-c1f86d864fc7"). InnerVolumeSpecName "kube-api-access-khzr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.439121 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-serving-cert\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.439188 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvk9m\" (UniqueName: \"kubernetes.io/projected/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-kube-api-access-tvk9m\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.439220 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-config\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.439257 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-client-ca\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.439318 4756 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c174fd58-b424-4aed-bfbe-c1f86d864fc7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.439907 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khzr2\" (UniqueName: \"kubernetes.io/projected/c174fd58-b424-4aed-bfbe-c1f86d864fc7-kube-api-access-khzr2\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.439978 4756 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.440002 4756 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.440024 4756 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c174fd58-b424-4aed-bfbe-c1f86d864fc7-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.441801 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-client-ca\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.441907 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-config\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " 
pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.447191 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-serving-cert\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.460375 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvk9m\" (UniqueName: \"kubernetes.io/projected/2ff7cb12-7fb1-413a-9ab5-fc352436fe8f-kube-api-access-tvk9m\") pod \"route-controller-manager-867d7d6955-qzhwz\" (UID: \"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f\") " pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.485232 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h"] Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.489432 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc945944d-6fx4h"] Feb 24 00:11:05 crc kubenswrapper[4756]: I0224 00:11:05.524654 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:05.842247 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f1e42e-f443-4acd-b244-8daa97958510" path="/var/lib/kubelet/pods/94f1e42e-f443-4acd-b244-8daa97958510/volumes" Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:05.936221 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz"] Feb 24 00:11:07 crc kubenswrapper[4756]: W0224 00:11:05.943077 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ff7cb12_7fb1_413a_9ab5_fc352436fe8f.slice/crio-5cd17e6f6cdae86df2fdbaa5fae5f0ed1920bfead4e6c164f9f6adb90c54d2ba WatchSource:0}: Error finding container 5cd17e6f6cdae86df2fdbaa5fae5f0ed1920bfead4e6c164f9f6adb90c54d2ba: Status 404 returned error can't find the container with id 5cd17e6f6cdae86df2fdbaa5fae5f0ed1920bfead4e6c164f9f6adb90c54d2ba Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:06.156316 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" event={"ID":"c174fd58-b424-4aed-bfbe-c1f86d864fc7","Type":"ContainerDied","Data":"d8714a3d52bd565e98f8d0144dd64ac5b2841cbcee88a4b38570b55b1f2bfc0a"} Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:06.156716 4756 scope.go:117] "RemoveContainer" containerID="ad45a48117929b22f587437fddd5dbe95bb5a7c7d7b17c15df70709a868d6c4d" Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:06.156332 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7758cf9bc8-mdndw" Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:06.157688 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" event={"ID":"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f","Type":"ContainerStarted","Data":"5cd17e6f6cdae86df2fdbaa5fae5f0ed1920bfead4e6c164f9f6adb90c54d2ba"} Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:06.185996 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7758cf9bc8-mdndw"] Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:06.205964 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7758cf9bc8-mdndw"] Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:07.168474 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" event={"ID":"2ff7cb12-7fb1-413a-9ab5-fc352436fe8f","Type":"ContainerStarted","Data":"dff26f085d40f9037a1d99825c08fe8f3d959e7c979a41cba6d34c6042ce1086"} Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:07.170116 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:07.178780 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:07.179922 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s6swn" event={"ID":"6c6e373e-56f6-40e1-94b4-d9c4116b0f9f","Type":"ContainerStarted","Data":"3d6d83b98ad08adcb98d7401f50a996cb924f1f1aacb48164be2598e900df537"} Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 
00:11:07.210486 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-867d7d6955-qzhwz" podStartSLOduration=4.210458037 podStartE2EDuration="4.210458037s" podCreationTimestamp="2026-02-24 00:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:11:07.206992674 +0000 UTC m=+324.117855327" watchObservedRunningTime="2026-02-24 00:11:07.210458037 +0000 UTC m=+324.121320660" Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:07.276113 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s6swn" podStartSLOduration=4.592254614 podStartE2EDuration="17.276055713s" podCreationTimestamp="2026-02-24 00:10:50 +0000 UTC" firstStartedPulling="2026-02-24 00:10:54.054192454 +0000 UTC m=+310.965055087" lastFinishedPulling="2026-02-24 00:11:06.737993553 +0000 UTC m=+323.648856186" observedRunningTime="2026-02-24 00:11:07.271674301 +0000 UTC m=+324.182536944" watchObservedRunningTime="2026-02-24 00:11:07.276055713 +0000 UTC m=+324.186918356" Feb 24 00:11:07 crc kubenswrapper[4756]: I0224 00:11:07.840419 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c174fd58-b424-4aed-bfbe-c1f86d864fc7" path="/var/lib/kubelet/pods/c174fd58-b424-4aed-bfbe-c1f86d864fc7/volumes" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.162060 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5449d68948-4hc9r"] Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.163022 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.166452 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.166780 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.166879 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.167019 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.167290 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.168504 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.178145 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-serving-cert\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.178203 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-proxy-ca-bundles\") pod \"controller-manager-5449d68948-4hc9r\" (UID: 
\"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.178369 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-client-ca\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.178431 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-config\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.178489 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm4t9\" (UniqueName: \"kubernetes.io/projected/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-kube-api-access-fm4t9\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.182531 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5449d68948-4hc9r"] Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.183508 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.280399 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-serving-cert\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.280462 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-proxy-ca-bundles\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.280548 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-client-ca\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.280568 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-config\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.280596 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm4t9\" (UniqueName: \"kubernetes.io/projected/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-kube-api-access-fm4t9\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.281831 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-client-ca\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.282810 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-config\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.283110 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-proxy-ca-bundles\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.288778 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-serving-cert\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.301168 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm4t9\" (UniqueName: \"kubernetes.io/projected/4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8-kube-api-access-fm4t9\") pod \"controller-manager-5449d68948-4hc9r\" (UID: \"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8\") " pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 
00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.483996 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.786601 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5449d68948-4hc9r"] Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.888652 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:11:08 crc kubenswrapper[4756]: I0224 00:11:08.945750 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zpsd2" Feb 24 00:11:09 crc kubenswrapper[4756]: I0224 00:11:09.201039 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" event={"ID":"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8","Type":"ContainerStarted","Data":"7458162d296b2c39ff8a34cb4452712fe0c635fc46fcb2ef22db4e221a8c089d"} Feb 24 00:11:09 crc kubenswrapper[4756]: I0224 00:11:09.201112 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" event={"ID":"4a1f7ed1-8972-4bb9-9dfe-ef4068632eb8","Type":"ContainerStarted","Data":"a7626fd9b33533aa24fdd933d92e140506119285aaca2929aea15b7662b87755"} Feb 24 00:11:09 crc kubenswrapper[4756]: I0224 00:11:09.202907 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:09 crc kubenswrapper[4756]: I0224 00:11:09.216269 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" Feb 24 00:11:09 crc kubenswrapper[4756]: I0224 00:11:09.226883 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-5449d68948-4hc9r" podStartSLOduration=6.226853856 podStartE2EDuration="6.226853856s" podCreationTimestamp="2026-02-24 00:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:11:09.222097173 +0000 UTC m=+326.132959806" watchObservedRunningTime="2026-02-24 00:11:09.226853856 +0000 UTC m=+326.137716489" Feb 24 00:11:11 crc kubenswrapper[4756]: I0224 00:11:11.063439 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s6swn" Feb 24 00:11:11 crc kubenswrapper[4756]: I0224 00:11:11.063920 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s6swn" Feb 24 00:11:11 crc kubenswrapper[4756]: I0224 00:11:11.119098 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s6swn" Feb 24 00:11:11 crc kubenswrapper[4756]: I0224 00:11:11.236973 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x2gzm" Feb 24 00:11:11 crc kubenswrapper[4756]: I0224 00:11:11.237026 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x2gzm" Feb 24 00:11:11 crc kubenswrapper[4756]: I0224 00:11:11.281224 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x2gzm" Feb 24 00:11:12 crc kubenswrapper[4756]: I0224 00:11:12.269129 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s6swn" Feb 24 00:11:12 crc kubenswrapper[4756]: I0224 00:11:12.279455 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x2gzm" Feb 24 00:11:38 crc 
kubenswrapper[4756]: I0224 00:11:38.225294 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tdbr9"] Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.227233 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.243685 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tdbr9"] Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.333814 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a377b12-b9e2-484b-8abc-3536083f974a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.333896 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a377b12-b9e2-484b-8abc-3536083f974a-registry-tls\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.333950 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a377b12-b9e2-484b-8abc-3536083f974a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.333978 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a377b12-b9e2-484b-8abc-3536083f974a-bound-sa-token\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.334023 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26ph\" (UniqueName: \"kubernetes.io/projected/1a377b12-b9e2-484b-8abc-3536083f974a-kube-api-access-x26ph\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.334042 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a377b12-b9e2-484b-8abc-3536083f974a-trusted-ca\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.334103 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.334125 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a377b12-b9e2-484b-8abc-3536083f974a-registry-certificates\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.360751 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.435093 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a377b12-b9e2-484b-8abc-3536083f974a-registry-tls\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.435192 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a377b12-b9e2-484b-8abc-3536083f974a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.435230 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a377b12-b9e2-484b-8abc-3536083f974a-bound-sa-token\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.435284 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x26ph\" (UniqueName: 
\"kubernetes.io/projected/1a377b12-b9e2-484b-8abc-3536083f974a-kube-api-access-x26ph\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.435311 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a377b12-b9e2-484b-8abc-3536083f974a-trusted-ca\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.435359 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a377b12-b9e2-484b-8abc-3536083f974a-registry-certificates\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.435395 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a377b12-b9e2-484b-8abc-3536083f974a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.436015 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1a377b12-b9e2-484b-8abc-3536083f974a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.437042 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1a377b12-b9e2-484b-8abc-3536083f974a-trusted-ca\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.437394 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1a377b12-b9e2-484b-8abc-3536083f974a-registry-certificates\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.441411 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1a377b12-b9e2-484b-8abc-3536083f974a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.441530 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1a377b12-b9e2-484b-8abc-3536083f974a-registry-tls\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.454974 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1a377b12-b9e2-484b-8abc-3536083f974a-bound-sa-token\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.455348 
4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x26ph\" (UniqueName: \"kubernetes.io/projected/1a377b12-b9e2-484b-8abc-3536083f974a-kube-api-access-x26ph\") pod \"image-registry-66df7c8f76-tdbr9\" (UID: \"1a377b12-b9e2-484b-8abc-3536083f974a\") " pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.552871 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:38 crc kubenswrapper[4756]: I0224 00:11:38.980364 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tdbr9"] Feb 24 00:11:39 crc kubenswrapper[4756]: I0224 00:11:39.406020 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" event={"ID":"1a377b12-b9e2-484b-8abc-3536083f974a","Type":"ContainerStarted","Data":"251c22cdecc6958c4a47ceea933460313653be72545cd32164bd6bb88aa98517"} Feb 24 00:11:39 crc kubenswrapper[4756]: I0224 00:11:39.407422 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" event={"ID":"1a377b12-b9e2-484b-8abc-3536083f974a","Type":"ContainerStarted","Data":"6fb7edc074b262ab1e55d58df53ce78b21c5c9f19cf6eb76e743035f82b59d01"} Feb 24 00:11:39 crc kubenswrapper[4756]: I0224 00:11:39.407570 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:58 crc kubenswrapper[4756]: I0224 00:11:58.559786 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" Feb 24 00:11:58 crc kubenswrapper[4756]: I0224 00:11:58.590642 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tdbr9" 
podStartSLOduration=20.590603611 podStartE2EDuration="20.590603611s" podCreationTimestamp="2026-02-24 00:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:11:39.43053905 +0000 UTC m=+356.341401703" watchObservedRunningTime="2026-02-24 00:11:58.590603611 +0000 UTC m=+375.501466284" Feb 24 00:11:58 crc kubenswrapper[4756]: I0224 00:11:58.637528 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-689hw"] Feb 24 00:12:22 crc kubenswrapper[4756]: I0224 00:12:22.711793 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:12:22 crc kubenswrapper[4756]: I0224 00:12:22.712601 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:12:23 crc kubenswrapper[4756]: I0224 00:12:23.699184 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" podUID="1edac0e9-6c20-41fe-83ad-6ade8001a0b9" containerName="registry" containerID="cri-o://368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f" gracePeriod=30 Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.039424 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.143305 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-ca-trust-extracted\") pod \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.143404 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-installation-pull-secrets\") pod \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.143425 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-certificates\") pod \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.143442 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-tls\") pod \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.143705 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.143862 4756 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6gqs\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-kube-api-access-j6gqs\") pod \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.143977 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-bound-sa-token\") pod \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.144044 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-trusted-ca\") pod \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\" (UID: \"1edac0e9-6c20-41fe-83ad-6ade8001a0b9\") " Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.144919 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "1edac0e9-6c20-41fe-83ad-6ade8001a0b9" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.145040 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "1edac0e9-6c20-41fe-83ad-6ade8001a0b9" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.151515 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "1edac0e9-6c20-41fe-83ad-6ade8001a0b9" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.151851 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "1edac0e9-6c20-41fe-83ad-6ade8001a0b9" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.153514 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "1edac0e9-6c20-41fe-83ad-6ade8001a0b9" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.160207 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "1edac0e9-6c20-41fe-83ad-6ade8001a0b9" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.160687 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "1edac0e9-6c20-41fe-83ad-6ade8001a0b9" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.162315 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-kube-api-access-j6gqs" (OuterVolumeSpecName: "kube-api-access-j6gqs") pod "1edac0e9-6c20-41fe-83ad-6ade8001a0b9" (UID: "1edac0e9-6c20-41fe-83ad-6ade8001a0b9"). InnerVolumeSpecName "kube-api-access-j6gqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.245895 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6gqs\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-kube-api-access-j6gqs\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.245951 4756 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.245965 4756 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.245982 4756 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.245996 4756 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.246007 4756 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.246019 4756 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1edac0e9-6c20-41fe-83ad-6ade8001a0b9-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.735481 4756 generic.go:334] "Generic (PLEG): container finished" podID="1edac0e9-6c20-41fe-83ad-6ade8001a0b9" containerID="368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f" exitCode=0 Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.735540 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" event={"ID":"1edac0e9-6c20-41fe-83ad-6ade8001a0b9","Type":"ContainerDied","Data":"368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f"} Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.735585 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" event={"ID":"1edac0e9-6c20-41fe-83ad-6ade8001a0b9","Type":"ContainerDied","Data":"a9ab4ac8eaa863c2c058cdf24dd900a2ed332b6141101149598ab8b9aa41643f"} Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.735607 4756 scope.go:117] "RemoveContainer" 
containerID="368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.735754 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-689hw" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.760787 4756 scope.go:117] "RemoveContainer" containerID="368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.766146 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-689hw"] Feb 24 00:12:24 crc kubenswrapper[4756]: E0224 00:12:24.766506 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f\": container with ID starting with 368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f not found: ID does not exist" containerID="368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.766561 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f"} err="failed to get container status \"368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f\": rpc error: code = NotFound desc = could not find container \"368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f\": container with ID starting with 368677ab23f3e0ece7abf4a5c1b8320c66522f5df89778b91779e28339a7dc8f not found: ID does not exist" Feb 24 00:12:24 crc kubenswrapper[4756]: I0224 00:12:24.769714 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-689hw"] Feb 24 00:12:25 crc kubenswrapper[4756]: I0224 00:12:25.841330 4756 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="1edac0e9-6c20-41fe-83ad-6ade8001a0b9" path="/var/lib/kubelet/pods/1edac0e9-6c20-41fe-83ad-6ade8001a0b9/volumes" Feb 24 00:12:52 crc kubenswrapper[4756]: I0224 00:12:52.711513 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:12:52 crc kubenswrapper[4756]: I0224 00:12:52.712283 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:13:22 crc kubenswrapper[4756]: I0224 00:13:22.711310 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:13:22 crc kubenswrapper[4756]: I0224 00:13:22.712232 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:13:22 crc kubenswrapper[4756]: I0224 00:13:22.712329 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:13:22 crc kubenswrapper[4756]: I0224 00:13:22.713470 4756 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3462137efdd08477626614f3719291731120c43a9269032f0fe7f282c877172c"} pod="openshift-machine-config-operator/machine-config-daemon-qb88h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:13:22 crc kubenswrapper[4756]: I0224 00:13:22.713582 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" containerID="cri-o://3462137efdd08477626614f3719291731120c43a9269032f0fe7f282c877172c" gracePeriod=600 Feb 24 00:13:23 crc kubenswrapper[4756]: I0224 00:13:23.128566 4756 generic.go:334] "Generic (PLEG): container finished" podID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerID="3462137efdd08477626614f3719291731120c43a9269032f0fe7f282c877172c" exitCode=0 Feb 24 00:13:23 crc kubenswrapper[4756]: I0224 00:13:23.128722 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerDied","Data":"3462137efdd08477626614f3719291731120c43a9269032f0fe7f282c877172c"} Feb 24 00:13:23 crc kubenswrapper[4756]: I0224 00:13:23.129053 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerStarted","Data":"a250b84fb5cf3d50f1c8f73096c01a386ce6306ff496adf3c239fdd204f80ec3"} Feb 24 00:13:23 crc kubenswrapper[4756]: I0224 00:13:23.129152 4756 scope.go:117] "RemoveContainer" containerID="1ebd0b9f29b8b8c5f0672acc442406de9bfeec0c1a9267bfca89268d8bf6ee7c" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.306901 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dvdfz"] Feb 24 00:14:56 crc 
kubenswrapper[4756]: I0224 00:14:56.313676 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovn-controller" containerID="cri-o://565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435" gracePeriod=30 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.314099 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="northd" containerID="cri-o://06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7" gracePeriod=30 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.314318 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d" gracePeriod=30 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.314505 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="kube-rbac-proxy-node" containerID="cri-o://c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5" gracePeriod=30 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.314689 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovn-acl-logging" containerID="cri-o://9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5" gracePeriod=30 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.313776 4756 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="nbdb" containerID="cri-o://dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0" gracePeriod=30 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.314774 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="sbdb" containerID="cri-o://680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f" gracePeriod=30 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.360793 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovnkube-controller" containerID="cri-o://1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f" gracePeriod=30 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.656866 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dvdfz_29053aeb-7913-4b4d-94dd-503af8e1415f/ovnkube-controller/1.log" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.661792 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dvdfz_29053aeb-7913-4b4d-94dd-503af8e1415f/ovn-acl-logging/0.log" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.662273 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dvdfz_29053aeb-7913-4b4d-94dd-503af8e1415f/ovn-controller/0.log" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.662737 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.713792 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sh5bz"] Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714043 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="northd" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714080 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="northd" Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714093 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovnkube-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714100 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovnkube-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714110 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovn-acl-logging" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714118 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovn-acl-logging" Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714131 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovnkube-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714138 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovnkube-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714147 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" 
containerName="kube-rbac-proxy-node" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714155 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="kube-rbac-proxy-node" Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714163 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="sbdb" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714168 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="sbdb" Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714177 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovnkube-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714183 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovnkube-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714192 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714198 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714205 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="nbdb" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714210 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="nbdb" Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714218 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1edac0e9-6c20-41fe-83ad-6ade8001a0b9" 
containerName="registry" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714225 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1edac0e9-6c20-41fe-83ad-6ade8001a0b9" containerName="registry" Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714238 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovn-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714244 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovn-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: E0224 00:14:56.714253 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="kubecfg-setup" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714259 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="kubecfg-setup" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714349 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="sbdb" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714359 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1edac0e9-6c20-41fe-83ad-6ade8001a0b9" containerName="registry" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714366 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="northd" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714373 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="kube-rbac-proxy-ovn-metrics" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714382 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="nbdb" Feb 24 
00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714388 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="kube-rbac-proxy-node" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714395 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovnkube-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714401 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovnkube-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714409 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovn-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714416 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovn-acl-logging" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.714626 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerName="ovnkube-controller" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.716310 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724418 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-kubelet\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724511 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-script-lib\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724524 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724542 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-node-log\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724608 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724610 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-node-log" (OuterVolumeSpecName: "node-log") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724631 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-var-lib-openvswitch\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724655 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-env-overrides\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724658 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724673 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-slash\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724698 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-config\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724715 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-bin\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724733 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-netd\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724746 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-openvswitch\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724769 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-log-socket\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724694 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724716 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-slash" (OuterVolumeSpecName: "host-slash") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724826 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724843 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724875 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-netns\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724867 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724919 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-log-socket" (OuterVolumeSpecName: "log-socket") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.724981 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29053aeb-7913-4b4d-94dd-503af8e1415f-ovn-node-metrics-cert\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725002 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725028 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-systemd-units\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725037 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725090 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-ovn-kubernetes\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725100 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725145 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-ovn\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725175 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-systemd\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725196 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725203 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725209 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-etc-openvswitch\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725228 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725204 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725271 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725280 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvfww\" (UniqueName: \"kubernetes.io/projected/29053aeb-7913-4b4d-94dd-503af8e1415f-kube-api-access-lvfww\") pod \"29053aeb-7913-4b4d-94dd-503af8e1415f\" (UID: \"29053aeb-7913-4b4d-94dd-503af8e1415f\") " Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725536 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-kubelet\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725572 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-run-ovn-kubernetes\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725604 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-etc-openvswitch\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725631 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-log-socket\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725705 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-var-lib-openvswitch\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725745 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-cni-bin\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725769 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1dd1d059-8b01-472d-a041-7da76fc3a1df-ovnkube-config\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725797 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1dd1d059-8b01-472d-a041-7da76fc3a1df-env-overrides\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725841 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-run-openvswitch\") pod \"ovnkube-node-sh5bz\" (UID: 
\"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725904 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-node-log\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725942 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-run-netns\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.725983 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-run-systemd\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726098 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-run-ovn\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726152 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726191 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1dd1d059-8b01-472d-a041-7da76fc3a1df-ovnkube-script-lib\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726224 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-systemd-units\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726259 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-slash\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726283 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1dd1d059-8b01-472d-a041-7da76fc3a1df-ovn-node-metrics-cert\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726314 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-cni-netd\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726364 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cbc8\" (UniqueName: \"kubernetes.io/projected/1dd1d059-8b01-472d-a041-7da76fc3a1df-kube-api-access-7cbc8\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726446 4756 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-log-socket\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726469 4756 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726488 4756 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726539 4756 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726558 4756 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-ovn\") on node \"crc\" 
DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726577 4756 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726594 4756 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726625 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726647 4756 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-node-log\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726673 4756 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726686 4756 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726696 4756 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 
00:14:56.726705 4756 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-slash\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726715 4756 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29053aeb-7913-4b4d-94dd-503af8e1415f-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726724 4756 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726733 4756 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.726742 4756 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.731519 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29053aeb-7913-4b4d-94dd-503af8e1415f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.731684 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29053aeb-7913-4b4d-94dd-503af8e1415f-kube-api-access-lvfww" (OuterVolumeSpecName: "kube-api-access-lvfww") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "kube-api-access-lvfww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.740594 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "29053aeb-7913-4b4d-94dd-503af8e1415f" (UID: "29053aeb-7913-4b4d-94dd-503af8e1415f"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.786929 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dvdfz_29053aeb-7913-4b4d-94dd-503af8e1415f/ovnkube-controller/1.log" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.789191 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dvdfz_29053aeb-7913-4b4d-94dd-503af8e1415f/ovn-acl-logging/0.log" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.789609 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dvdfz_29053aeb-7913-4b4d-94dd-503af8e1415f/ovn-controller/0.log" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.789971 4756 generic.go:334] "Generic (PLEG): container finished" podID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerID="1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f" exitCode=0 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.789994 4756 generic.go:334] "Generic 
(PLEG): container finished" podID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerID="680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f" exitCode=0 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790002 4756 generic.go:334] "Generic (PLEG): container finished" podID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerID="dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0" exitCode=0 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790008 4756 generic.go:334] "Generic (PLEG): container finished" podID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerID="06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7" exitCode=0 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790016 4756 generic.go:334] "Generic (PLEG): container finished" podID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerID="52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d" exitCode=0 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790023 4756 generic.go:334] "Generic (PLEG): container finished" podID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerID="c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5" exitCode=0 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790034 4756 generic.go:334] "Generic (PLEG): container finished" podID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerID="9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5" exitCode=143 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790047 4756 generic.go:334] "Generic (PLEG): container finished" podID="29053aeb-7913-4b4d-94dd-503af8e1415f" containerID="565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435" exitCode=143 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790133 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" 
event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790219 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790248 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790166 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790265 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790282 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790300 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790323 4756 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790338 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790347 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790356 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790364 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790378 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790386 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790395 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790405 4756 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790417 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790430 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790440 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790449 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790456 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790464 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790472 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d"} Feb 24 
00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790480 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790488 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790495 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790503 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790513 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790528 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790555 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790563 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790605 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790613 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790621 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790629 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790637 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790645 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790652 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790663 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-dvdfz" event={"ID":"29053aeb-7913-4b4d-94dd-503af8e1415f","Type":"ContainerDied","Data":"4691a61cd508cb5a3eb752efbf7555f5e60213b560da783a0f0faf6fe06b16ce"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790674 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790683 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790692 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790699 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790707 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790714 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790722 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 
00:14:56.790730 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790739 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790749 4756 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.790775 4756 scope.go:117] "RemoveContainer" containerID="1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.793571 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5xm6s_9de3ae24-6a68-4d42-bb86-f3d22a6b651a/kube-multus/0.log" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.793707 4756 generic.go:334] "Generic (PLEG): container finished" podID="9de3ae24-6a68-4d42-bb86-f3d22a6b651a" containerID="45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa" exitCode=2 Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.793765 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5xm6s" event={"ID":"9de3ae24-6a68-4d42-bb86-f3d22a6b651a","Type":"ContainerDied","Data":"45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa"} Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.797742 4756 scope.go:117] "RemoveContainer" containerID="45e26bfe82bffd0f1548a12d8138757a37286dcbeb74753dcedb81d4205998fa" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.812197 4756 scope.go:117] "RemoveContainer" 
containerID="7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.827213 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-node-log\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.827272 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-run-netns\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.827317 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-run-systemd\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.827326 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-node-log\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.827363 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-run-ovn\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 
00:14:56.827418 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-run-systemd\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.827476 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-run-netns\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.827609 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-run-ovn\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.828622 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.828206 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829395 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1dd1d059-8b01-472d-a041-7da76fc3a1df-ovnkube-script-lib\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829429 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-systemd-units\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829460 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-slash\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829486 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1dd1d059-8b01-472d-a041-7da76fc3a1df-ovn-node-metrics-cert\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829512 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-cni-netd\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829535 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-systemd-units\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829563 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cbc8\" (UniqueName: \"kubernetes.io/projected/1dd1d059-8b01-472d-a041-7da76fc3a1df-kube-api-access-7cbc8\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829594 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-kubelet\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829620 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-run-ovn-kubernetes\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829653 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-etc-openvswitch\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829677 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-log-socket\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829728 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-var-lib-openvswitch\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829777 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-cni-bin\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829806 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1dd1d059-8b01-472d-a041-7da76fc3a1df-ovnkube-config\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829860 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-run-ovn-kubernetes\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829866 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/1dd1d059-8b01-472d-a041-7da76fc3a1df-env-overrides\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.829937 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-run-openvswitch\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.830022 4756 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29053aeb-7913-4b4d-94dd-503af8e1415f-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.830035 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-etc-openvswitch\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.830041 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-cni-netd\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.830047 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvfww\" (UniqueName: \"kubernetes.io/projected/29053aeb-7913-4b4d-94dd-503af8e1415f-kube-api-access-lvfww\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.830210 4756 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-log-socket\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.830215 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-cni-bin\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.830264 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-kubelet\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.830277 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-host-slash\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.830445 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1dd1d059-8b01-472d-a041-7da76fc3a1df-env-overrides\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.830089 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-run-openvswitch\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.831243 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1dd1d059-8b01-472d-a041-7da76fc3a1df-ovnkube-config\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.831247 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1dd1d059-8b01-472d-a041-7da76fc3a1df-ovnkube-script-lib\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.831324 4756 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29053aeb-7913-4b4d-94dd-503af8e1415f-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.832224 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dd1d059-8b01-472d-a041-7da76fc3a1df-var-lib-openvswitch\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.834942 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1dd1d059-8b01-472d-a041-7da76fc3a1df-ovn-node-metrics-cert\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.840169 4756 scope.go:117] "RemoveContainer" containerID="680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.856661 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cbc8\" (UniqueName: \"kubernetes.io/projected/1dd1d059-8b01-472d-a041-7da76fc3a1df-kube-api-access-7cbc8\") pod \"ovnkube-node-sh5bz\" (UID: \"1dd1d059-8b01-472d-a041-7da76fc3a1df\") " pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.861234 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dvdfz"] Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.866394 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dvdfz"] Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.868502 4756 scope.go:117] "RemoveContainer" containerID="dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.882982 4756 scope.go:117] "RemoveContainer" containerID="06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.915339 4756 scope.go:117] "RemoveContainer" containerID="52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.929695 4756 scope.go:117] "RemoveContainer" containerID="c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.948280 4756 scope.go:117] "RemoveContainer" containerID="9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.968429 4756 scope.go:117] "RemoveContainer" 
containerID="565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435" Feb 24 00:14:56 crc kubenswrapper[4756]: I0224 00:14:56.988202 4756 scope.go:117] "RemoveContainer" containerID="5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.004560 4756 scope.go:117] "RemoveContainer" containerID="1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f" Feb 24 00:14:57 crc kubenswrapper[4756]: E0224 00:14:57.005549 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f\": container with ID starting with 1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f not found: ID does not exist" containerID="1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.005607 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"} err="failed to get container status \"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f\": rpc error: code = NotFound desc = could not find container \"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f\": container with ID starting with 1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.005647 4756 scope.go:117] "RemoveContainer" containerID="7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae" Feb 24 00:14:57 crc kubenswrapper[4756]: E0224 00:14:57.006125 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\": container with ID starting with 
7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae not found: ID does not exist" containerID="7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.006167 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae"} err="failed to get container status \"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\": rpc error: code = NotFound desc = could not find container \"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\": container with ID starting with 7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.006200 4756 scope.go:117] "RemoveContainer" containerID="680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f" Feb 24 00:14:57 crc kubenswrapper[4756]: E0224 00:14:57.006529 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\": container with ID starting with 680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f not found: ID does not exist" containerID="680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.006560 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f"} err="failed to get container status \"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\": rpc error: code = NotFound desc = could not find container \"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\": container with ID starting with 680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f not found: ID does not 
exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.006577 4756 scope.go:117] "RemoveContainer" containerID="dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0" Feb 24 00:14:57 crc kubenswrapper[4756]: E0224 00:14:57.006941 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\": container with ID starting with dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0 not found: ID does not exist" containerID="dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.006995 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0"} err="failed to get container status \"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\": rpc error: code = NotFound desc = could not find container \"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\": container with ID starting with dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.007041 4756 scope.go:117] "RemoveContainer" containerID="06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7" Feb 24 00:14:57 crc kubenswrapper[4756]: E0224 00:14:57.007511 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\": container with ID starting with 06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7 not found: ID does not exist" containerID="06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.007541 4756 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7"} err="failed to get container status \"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\": rpc error: code = NotFound desc = could not find container \"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\": container with ID starting with 06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.007557 4756 scope.go:117] "RemoveContainer" containerID="52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d" Feb 24 00:14:57 crc kubenswrapper[4756]: E0224 00:14:57.007845 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\": container with ID starting with 52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d not found: ID does not exist" containerID="52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.007868 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d"} err="failed to get container status \"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\": rpc error: code = NotFound desc = could not find container \"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\": container with ID starting with 52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.007881 4756 scope.go:117] "RemoveContainer" containerID="c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5" Feb 24 00:14:57 crc kubenswrapper[4756]: E0224 00:14:57.008175 4756 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\": container with ID starting with c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5 not found: ID does not exist" containerID="c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.008214 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5"} err="failed to get container status \"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\": rpc error: code = NotFound desc = could not find container \"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\": container with ID starting with c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.008235 4756 scope.go:117] "RemoveContainer" containerID="9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5" Feb 24 00:14:57 crc kubenswrapper[4756]: E0224 00:14:57.008517 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\": container with ID starting with 9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5 not found: ID does not exist" containerID="9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.008543 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5"} err="failed to get container status \"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\": rpc error: code = NotFound desc = could 
not find container \"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\": container with ID starting with 9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.008559 4756 scope.go:117] "RemoveContainer" containerID="565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435" Feb 24 00:14:57 crc kubenswrapper[4756]: E0224 00:14:57.009177 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\": container with ID starting with 565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435 not found: ID does not exist" containerID="565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.009201 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435"} err="failed to get container status \"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\": rpc error: code = NotFound desc = could not find container \"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\": container with ID starting with 565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.009213 4756 scope.go:117] "RemoveContainer" containerID="5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb" Feb 24 00:14:57 crc kubenswrapper[4756]: E0224 00:14:57.010156 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\": container with ID starting with 5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb not found: 
ID does not exist" containerID="5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.010190 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb"} err="failed to get container status \"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\": rpc error: code = NotFound desc = could not find container \"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\": container with ID starting with 5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.010211 4756 scope.go:117] "RemoveContainer" containerID="1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.010468 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"} err="failed to get container status \"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f\": rpc error: code = NotFound desc = could not find container \"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f\": container with ID starting with 1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.010493 4756 scope.go:117] "RemoveContainer" containerID="7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.010747 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae"} err="failed to get container status \"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\": rpc error: code = 
NotFound desc = could not find container \"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\": container with ID starting with 7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.010772 4756 scope.go:117] "RemoveContainer" containerID="680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.011242 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f"} err="failed to get container status \"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\": rpc error: code = NotFound desc = could not find container \"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\": container with ID starting with 680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.011278 4756 scope.go:117] "RemoveContainer" containerID="dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.011572 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0"} err="failed to get container status \"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\": rpc error: code = NotFound desc = could not find container \"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\": container with ID starting with dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.011594 4756 scope.go:117] "RemoveContainer" containerID="06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7" Feb 24 00:14:57 crc 
kubenswrapper[4756]: I0224 00:14:57.012036 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7"} err="failed to get container status \"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\": rpc error: code = NotFound desc = could not find container \"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\": container with ID starting with 06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.012115 4756 scope.go:117] "RemoveContainer" containerID="52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.012454 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d"} err="failed to get container status \"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\": rpc error: code = NotFound desc = could not find container \"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\": container with ID starting with 52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.012482 4756 scope.go:117] "RemoveContainer" containerID="c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.012830 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5"} err="failed to get container status \"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\": rpc error: code = NotFound desc = could not find container \"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\": container 
with ID starting with c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.012852 4756 scope.go:117] "RemoveContainer" containerID="9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.013201 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5"} err="failed to get container status \"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\": rpc error: code = NotFound desc = could not find container \"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\": container with ID starting with 9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.013225 4756 scope.go:117] "RemoveContainer" containerID="565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.013785 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435"} err="failed to get container status \"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\": rpc error: code = NotFound desc = could not find container \"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\": container with ID starting with 565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.013815 4756 scope.go:117] "RemoveContainer" containerID="5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.014159 4756 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb"} err="failed to get container status \"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\": rpc error: code = NotFound desc = could not find container \"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\": container with ID starting with 5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.014184 4756 scope.go:117] "RemoveContainer" containerID="1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.014484 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"} err="failed to get container status \"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f\": rpc error: code = NotFound desc = could not find container \"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f\": container with ID starting with 1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.014516 4756 scope.go:117] "RemoveContainer" containerID="7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.014796 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae"} err="failed to get container status \"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\": rpc error: code = NotFound desc = could not find container \"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\": container with ID starting with 7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae not found: ID does not 
exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.014874 4756 scope.go:117] "RemoveContainer" containerID="680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.015181 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f"} err="failed to get container status \"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\": rpc error: code = NotFound desc = could not find container \"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\": container with ID starting with 680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.015207 4756 scope.go:117] "RemoveContainer" containerID="dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.015454 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0"} err="failed to get container status \"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\": rpc error: code = NotFound desc = could not find container \"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\": container with ID starting with dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.015474 4756 scope.go:117] "RemoveContainer" containerID="06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.015784 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7"} err="failed to get container status 
\"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\": rpc error: code = NotFound desc = could not find container \"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\": container with ID starting with 06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.015803 4756 scope.go:117] "RemoveContainer" containerID="52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.016042 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d"} err="failed to get container status \"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\": rpc error: code = NotFound desc = could not find container \"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\": container with ID starting with 52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.016094 4756 scope.go:117] "RemoveContainer" containerID="c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.016414 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5"} err="failed to get container status \"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\": rpc error: code = NotFound desc = could not find container \"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\": container with ID starting with c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.016455 4756 scope.go:117] "RemoveContainer" 
containerID="9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.016803 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5"} err="failed to get container status \"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\": rpc error: code = NotFound desc = could not find container \"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\": container with ID starting with 9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.016827 4756 scope.go:117] "RemoveContainer" containerID="565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.017116 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435"} err="failed to get container status \"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\": rpc error: code = NotFound desc = could not find container \"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\": container with ID starting with 565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435 not found: ID does not exist" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.017144 4756 scope.go:117] "RemoveContainer" containerID="5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb" Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.017405 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb"} err="failed to get container status \"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\": rpc error: code = NotFound desc = could 
not find container \"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\": container with ID starting with 5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.017435 4756 scope.go:117] "RemoveContainer" containerID="1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.017741 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"} err="failed to get container status \"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f\": rpc error: code = NotFound desc = could not find container \"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f\": container with ID starting with 1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.017776 4756 scope.go:117] "RemoveContainer" containerID="7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.018144 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae"} err="failed to get container status \"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\": rpc error: code = NotFound desc = could not find container \"7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae\": container with ID starting with 7302ae68a44ef98d667e70f3c3a79fca38ef8cd4722cb8fe8b248499c22f15ae not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.018167 4756 scope.go:117] "RemoveContainer" containerID="680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.018424 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f"} err="failed to get container status \"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\": rpc error: code = NotFound desc = could not find container \"680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f\": container with ID starting with 680a8a34f34b337cc5208ab4169f755090c8c7eed4debb23b1a5825b20727d0f not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.018446 4756 scope.go:117] "RemoveContainer" containerID="dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.018819 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0"} err="failed to get container status \"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\": rpc error: code = NotFound desc = could not find container \"dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0\": container with ID starting with dd4865ef2cdff3de4e1d0241aeca962c15dcb4ad924f35c7b7f86dd314c881b0 not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.018857 4756 scope.go:117] "RemoveContainer" containerID="06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.019237 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7"} err="failed to get container status \"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\": rpc error: code = NotFound desc = could not find container \"06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7\": container with ID starting with 06f398daa988a797dac93a1813bbdc3b36baef07a4743f452b27737408fb60f7 not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.019257 4756 scope.go:117] "RemoveContainer" containerID="52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.019613 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d"} err="failed to get container status \"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\": rpc error: code = NotFound desc = could not find container \"52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d\": container with ID starting with 52e1788686e8cfac0247f45e0c7587b8330cadf281cd5b93da08286b8004fd1d not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.019633 4756 scope.go:117] "RemoveContainer" containerID="c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.019923 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5"} err="failed to get container status \"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\": rpc error: code = NotFound desc = could not find container \"c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5\": container with ID starting with c3bbcfc1cc47886f24c334ac480c1451748a670251070fbf03e2c0f16aa389c5 not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.019944 4756 scope.go:117] "RemoveContainer" containerID="9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.020239 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5"} err="failed to get container status \"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\": rpc error: code = NotFound desc = could not find container \"9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5\": container with ID starting with 9d24ef32d7398acedcff63b008f13b9091d220e4c357202765a36a3146d436b5 not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.020269 4756 scope.go:117] "RemoveContainer" containerID="565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.021046 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435"} err="failed to get container status \"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\": rpc error: code = NotFound desc = could not find container \"565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435\": container with ID starting with 565151817daceaab16481bf3189f792a2e1abcbb79ceb2302f575619b03d7435 not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.021312 4756 scope.go:117] "RemoveContainer" containerID="5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.021744 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb"} err="failed to get container status \"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\": rpc error: code = NotFound desc = could not find container \"5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb\": container with ID starting with 5bc85ebf0f4f39f3cb2b031b4108418797c134d327cb12786fdb97f5e35a3ddb not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.021766 4756 scope.go:117] "RemoveContainer" containerID="1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.022211 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f"} err="failed to get container status \"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f\": rpc error: code = NotFound desc = could not find container \"1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f\": container with ID starting with 1db1359a011178bb040a39fe002c74a318e01c408f6a8f82574d0400f934665f not found: ID does not exist"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.031947 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz"
Feb 24 00:14:57 crc kubenswrapper[4756]: W0224 00:14:57.052466 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dd1d059_8b01_472d_a041_7da76fc3a1df.slice/crio-7cea48b16f6dd3dd89a39a3e7d000b6f196869ab12ee13fb2bf84310fe3f5000 WatchSource:0}: Error finding container 7cea48b16f6dd3dd89a39a3e7d000b6f196869ab12ee13fb2bf84310fe3f5000: Status 404 returned error can't find the container with id 7cea48b16f6dd3dd89a39a3e7d000b6f196869ab12ee13fb2bf84310fe3f5000
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.803174 4756 generic.go:334] "Generic (PLEG): container finished" podID="1dd1d059-8b01-472d-a041-7da76fc3a1df" containerID="e32dc80b1549c165ead2b911912d5928aa59eed1c638a5d62a4dd6c466d0bc3a" exitCode=0
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.803279 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" event={"ID":"1dd1d059-8b01-472d-a041-7da76fc3a1df","Type":"ContainerDied","Data":"e32dc80b1549c165ead2b911912d5928aa59eed1c638a5d62a4dd6c466d0bc3a"}
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.805153 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" event={"ID":"1dd1d059-8b01-472d-a041-7da76fc3a1df","Type":"ContainerStarted","Data":"7cea48b16f6dd3dd89a39a3e7d000b6f196869ab12ee13fb2bf84310fe3f5000"}
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.809400 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5xm6s_9de3ae24-6a68-4d42-bb86-f3d22a6b651a/kube-multus/0.log"
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.809450 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5xm6s" event={"ID":"9de3ae24-6a68-4d42-bb86-f3d22a6b651a","Type":"ContainerStarted","Data":"a3e81b3ca45998e1fac855b288f2f69771d91b4d57ec7ff49013d79e66223e69"}
Feb 24 00:14:57 crc kubenswrapper[4756]: I0224 00:14:57.844423 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29053aeb-7913-4b4d-94dd-503af8e1415f" path="/var/lib/kubelet/pods/29053aeb-7913-4b4d-94dd-503af8e1415f/volumes"
Feb 24 00:14:58 crc kubenswrapper[4756]: I0224 00:14:58.820878 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" event={"ID":"1dd1d059-8b01-472d-a041-7da76fc3a1df","Type":"ContainerStarted","Data":"c888faabd26a898ac2a4c59646effe8e427fef593710ec61994b978ca3b3db07"}
Feb 24 00:14:58 crc kubenswrapper[4756]: I0224 00:14:58.821504 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" event={"ID":"1dd1d059-8b01-472d-a041-7da76fc3a1df","Type":"ContainerStarted","Data":"2b964494abd0eafdf2b34938cd3db2b46c246457fdde97d8fc0b1c9865b29e76"}
Feb 24 00:14:58 crc kubenswrapper[4756]: I0224 00:14:58.821533 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" event={"ID":"1dd1d059-8b01-472d-a041-7da76fc3a1df","Type":"ContainerStarted","Data":"0934cf01b791cd084732c0d44f765af6c07d5f8e7205b7d3fef3de0eadfae9af"}
Feb 24 00:14:58 crc kubenswrapper[4756]: I0224 00:14:58.821560 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" event={"ID":"1dd1d059-8b01-472d-a041-7da76fc3a1df","Type":"ContainerStarted","Data":"ffb389311231977c3817058ca5c46584da19f82ac7ed7ccc21f8a3fc0ef4502f"}
Feb 24 00:14:58 crc kubenswrapper[4756]: I0224 00:14:58.821580 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" event={"ID":"1dd1d059-8b01-472d-a041-7da76fc3a1df","Type":"ContainerStarted","Data":"aac47ff0139224bf2eb3e4bd50e35b26f0e5abd8eb9c9e960fbf0a589f18cab2"}
Feb 24 00:14:58 crc kubenswrapper[4756]: I0224 00:14:58.821599 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" event={"ID":"1dd1d059-8b01-472d-a041-7da76fc3a1df","Type":"ContainerStarted","Data":"d69afd7c1cd004ead2b4ad6bcd2c0e2ec080cec883ad2f81e5f4af859cb64a72"}
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.212498 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"]
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.214238 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.217039 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.217050 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.282010 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-config-volume\") pod \"collect-profiles-29531535-c6dvn\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.282059 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-secret-volume\") pod \"collect-profiles-29531535-c6dvn\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.282275 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qkt2\" (UniqueName: \"kubernetes.io/projected/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-kube-api-access-2qkt2\") pod \"collect-profiles-29531535-c6dvn\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.383997 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qkt2\" (UniqueName: \"kubernetes.io/projected/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-kube-api-access-2qkt2\") pod \"collect-profiles-29531535-c6dvn\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.384130 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-config-volume\") pod \"collect-profiles-29531535-c6dvn\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.384171 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-secret-volume\") pod \"collect-profiles-29531535-c6dvn\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.385141 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-config-volume\") pod \"collect-profiles-29531535-c6dvn\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.390161 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-secret-volume\") pod \"collect-profiles-29531535-c6dvn\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.405197 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qkt2\" (UniqueName: \"kubernetes.io/projected/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-kube-api-access-2qkt2\") pod \"collect-profiles-29531535-c6dvn\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: I0224 00:15:00.529188 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: E0224 00:15:00.556879 4756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager_b943ae93-380c-4c31-aaa1-0ab4bf6b0d71_0(82cf0d80e8d539aa1385e6af71fe39bcc0dab75361a17fbdcf5530497a7e7b0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 24 00:15:00 crc kubenswrapper[4756]: E0224 00:15:00.556971 4756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager_b943ae93-380c-4c31-aaa1-0ab4bf6b0d71_0(82cf0d80e8d539aa1385e6af71fe39bcc0dab75361a17fbdcf5530497a7e7b0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: E0224 00:15:00.556994 4756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager_b943ae93-380c-4c31-aaa1-0ab4bf6b0d71_0(82cf0d80e8d539aa1385e6af71fe39bcc0dab75361a17fbdcf5530497a7e7b0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:00 crc kubenswrapper[4756]: E0224 00:15:00.557047 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager(b943ae93-380c-4c31-aaa1-0ab4bf6b0d71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager(b943ae93-380c-4c31-aaa1-0ab4bf6b0d71)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager_b943ae93-380c-4c31-aaa1-0ab4bf6b0d71_0(82cf0d80e8d539aa1385e6af71fe39bcc0dab75361a17fbdcf5530497a7e7b0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn" podUID="b943ae93-380c-4c31-aaa1-0ab4bf6b0d71"
Feb 24 00:15:01 crc kubenswrapper[4756]: I0224 00:15:01.846344 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" event={"ID":"1dd1d059-8b01-472d-a041-7da76fc3a1df","Type":"ContainerStarted","Data":"610a54ac7718a27c2f70dbee860aa89bbccc85299ae63aa60599e676a530c31a"}
Feb 24 00:15:03 crc kubenswrapper[4756]: I0224 00:15:03.863940 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" event={"ID":"1dd1d059-8b01-472d-a041-7da76fc3a1df","Type":"ContainerStarted","Data":"58fd6aca47d8db01a31400f87a1820e92036b4562bf3aef6159cfd26d8709f00"}
Feb 24 00:15:03 crc kubenswrapper[4756]: I0224 00:15:03.865259 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz"
Feb 24 00:15:03 crc kubenswrapper[4756]: I0224 00:15:03.865301 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz"
Feb 24 00:15:03 crc kubenswrapper[4756]: I0224 00:15:03.865350 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz"
Feb 24 00:15:03 crc kubenswrapper[4756]: I0224 00:15:03.900097 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz"
Feb 24 00:15:03 crc kubenswrapper[4756]: I0224 00:15:03.904876 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz" podStartSLOduration=7.904856551 podStartE2EDuration="7.904856551s" podCreationTimestamp="2026-02-24 00:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:15:03.900632265 +0000 UTC m=+560.811494938" watchObservedRunningTime="2026-02-24 00:15:03.904856551 +0000 UTC m=+560.815719214"
Feb 24 00:15:03 crc kubenswrapper[4756]: I0224 00:15:03.915602 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz"
Feb 24 00:15:04 crc kubenswrapper[4756]: I0224 00:15:04.415314 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"]
Feb 24 00:15:04 crc kubenswrapper[4756]: I0224 00:15:04.415489 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:04 crc kubenswrapper[4756]: I0224 00:15:04.416045 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:04 crc kubenswrapper[4756]: E0224 00:15:04.445598 4756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager_b943ae93-380c-4c31-aaa1-0ab4bf6b0d71_0(57eb0d931e26d0743b00d562376a30c1410db9f82354452a6e003e60e329f027): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 24 00:15:04 crc kubenswrapper[4756]: E0224 00:15:04.445681 4756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager_b943ae93-380c-4c31-aaa1-0ab4bf6b0d71_0(57eb0d931e26d0743b00d562376a30c1410db9f82354452a6e003e60e329f027): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:04 crc kubenswrapper[4756]: E0224 00:15:04.445709 4756 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager_b943ae93-380c-4c31-aaa1-0ab4bf6b0d71_0(57eb0d931e26d0743b00d562376a30c1410db9f82354452a6e003e60e329f027): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:04 crc kubenswrapper[4756]: E0224 00:15:04.445775 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager(b943ae93-380c-4c31-aaa1-0ab4bf6b0d71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager(b943ae93-380c-4c31-aaa1-0ab4bf6b0d71)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-29531535-c6dvn_openshift-operator-lifecycle-manager_b943ae93-380c-4c31-aaa1-0ab4bf6b0d71_0(57eb0d931e26d0743b00d562376a30c1410db9f82354452a6e003e60e329f027): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn" podUID="b943ae93-380c-4c31-aaa1-0ab4bf6b0d71"
Feb 24 00:15:18 crc kubenswrapper[4756]: I0224 00:15:18.832534 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:18 crc kubenswrapper[4756]: I0224 00:15:18.833543 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:19 crc kubenswrapper[4756]: I0224 00:15:19.053656 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"]
Feb 24 00:15:19 crc kubenswrapper[4756]: W0224 00:15:19.062159 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb943ae93_380c_4c31_aaa1_0ab4bf6b0d71.slice/crio-305063390c7ab3c9bac717faa2ca12226b04d499e1e4375ffc722fcd76cc547a WatchSource:0}: Error finding container 305063390c7ab3c9bac717faa2ca12226b04d499e1e4375ffc722fcd76cc547a: Status 404 returned error can't find the container with id 305063390c7ab3c9bac717faa2ca12226b04d499e1e4375ffc722fcd76cc547a
Feb 24 00:15:19 crc kubenswrapper[4756]: I0224 00:15:19.970670 4756 generic.go:334] "Generic (PLEG): container finished" podID="b943ae93-380c-4c31-aaa1-0ab4bf6b0d71" containerID="d5ad679d08384f524c3b500422683344ffd4c141d6bbeb501f00129bc07e5b15" exitCode=0
Feb 24 00:15:19 crc kubenswrapper[4756]: I0224 00:15:19.971028 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn" event={"ID":"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71","Type":"ContainerDied","Data":"d5ad679d08384f524c3b500422683344ffd4c141d6bbeb501f00129bc07e5b15"}
Feb 24 00:15:19 crc kubenswrapper[4756]: I0224 00:15:19.971120 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn" event={"ID":"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71","Type":"ContainerStarted","Data":"305063390c7ab3c9bac717faa2ca12226b04d499e1e4375ffc722fcd76cc547a"}
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.239215 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.320501 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-secret-volume\") pod \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") "
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.320695 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-config-volume\") pod \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") "
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.320757 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qkt2\" (UniqueName: \"kubernetes.io/projected/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-kube-api-access-2qkt2\") pod \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\" (UID: \"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71\") "
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.321641 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-config-volume" (OuterVolumeSpecName: "config-volume") pod "b943ae93-380c-4c31-aaa1-0ab4bf6b0d71" (UID: "b943ae93-380c-4c31-aaa1-0ab4bf6b0d71"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.333412 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-kube-api-access-2qkt2" (OuterVolumeSpecName: "kube-api-access-2qkt2") pod "b943ae93-380c-4c31-aaa1-0ab4bf6b0d71" (UID: "b943ae93-380c-4c31-aaa1-0ab4bf6b0d71"). InnerVolumeSpecName "kube-api-access-2qkt2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.333478 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b943ae93-380c-4c31-aaa1-0ab4bf6b0d71" (UID: "b943ae93-380c-4c31-aaa1-0ab4bf6b0d71"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.422716 4756 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-config-volume\") on node \"crc\" DevicePath \"\""
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.422752 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qkt2\" (UniqueName: \"kubernetes.io/projected/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-kube-api-access-2qkt2\") on node \"crc\" DevicePath \"\""
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.422767 4756 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b943ae93-380c-4c31-aaa1-0ab4bf6b0d71-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.985548 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn" event={"ID":"b943ae93-380c-4c31-aaa1-0ab4bf6b0d71","Type":"ContainerDied","Data":"305063390c7ab3c9bac717faa2ca12226b04d499e1e4375ffc722fcd76cc547a"}
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.985600 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="305063390c7ab3c9bac717faa2ca12226b04d499e1e4375ffc722fcd76cc547a"
Feb 24 00:15:21 crc kubenswrapper[4756]: I0224 00:15:21.985609 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531535-c6dvn"
Feb 24 00:15:22 crc kubenswrapper[4756]: I0224 00:15:22.711096 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 00:15:22 crc kubenswrapper[4756]: I0224 00:15:22.711187 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 00:15:27 crc kubenswrapper[4756]: I0224 00:15:27.057269 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sh5bz"
Feb 24 00:15:52 crc kubenswrapper[4756]: I0224 00:15:52.710990 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 24 00:15:52 crc kubenswrapper[4756]: I0224 00:15:52.711769 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.268570 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwc9"]
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.269567 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9cwc9" podUID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" containerName="registry-server" containerID="cri-o://ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c" gracePeriod=30
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.284865 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dz9sx"]
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.285449 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dz9sx" podUID="7b06bc00-ef2e-451a-b07d-da301f20df31" containerName="registry-server" containerID="cri-o://d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546" gracePeriod=30
Feb 24 00:15:58 crc kubenswrapper[4756]: E0224 00:15:58.645722 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c is running failed: container process not found" containerID="ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c" cmd=["grpc_health_probe","-addr=:50051"]
Feb 24 00:15:58 crc kubenswrapper[4756]: E0224 00:15:58.647129 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c is running failed: container process not found" containerID="ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c" cmd=["grpc_health_probe","-addr=:50051"]
Feb 24 00:15:58 crc kubenswrapper[4756]: E0224 00:15:58.647674 4756 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c is running failed: container process not found" containerID="ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c" cmd=["grpc_health_probe","-addr=:50051"]
Feb 24 00:15:58 crc kubenswrapper[4756]: E0224 00:15:58.647805 4756 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-9cwc9" podUID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" containerName="registry-server"
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.673257 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cwc9"
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.687859 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dz9sx"
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.763881 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmvwq\" (UniqueName: \"kubernetes.io/projected/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-kube-api-access-nmvwq\") pod \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\" (UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") "
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.763956 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-catalog-content\") pod \"7b06bc00-ef2e-451a-b07d-da301f20df31\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") "
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.764118 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-catalog-content\") pod \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\" (UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") "
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.764162 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t8c2\" (UniqueName: \"kubernetes.io/projected/7b06bc00-ef2e-451a-b07d-da301f20df31-kube-api-access-4t8c2\") pod \"7b06bc00-ef2e-451a-b07d-da301f20df31\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") "
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.764201 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-utilities\") pod \"7b06bc00-ef2e-451a-b07d-da301f20df31\" (UID: \"7b06bc00-ef2e-451a-b07d-da301f20df31\") "
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.764279 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-utilities\") pod \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\" (UID: \"14871dd9-6e40-4815-96fb-36f1dbbe2a1b\") "
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.765291 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-utilities" (OuterVolumeSpecName: "utilities") pod "7b06bc00-ef2e-451a-b07d-da301f20df31" (UID: "7b06bc00-ef2e-451a-b07d-da301f20df31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.766200 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-utilities" (OuterVolumeSpecName: "utilities") pod "14871dd9-6e40-4815-96fb-36f1dbbe2a1b" (UID: "14871dd9-6e40-4815-96fb-36f1dbbe2a1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.770951 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b06bc00-ef2e-451a-b07d-da301f20df31-kube-api-access-4t8c2" (OuterVolumeSpecName: "kube-api-access-4t8c2") pod "7b06bc00-ef2e-451a-b07d-da301f20df31" (UID: "7b06bc00-ef2e-451a-b07d-da301f20df31"). InnerVolumeSpecName "kube-api-access-4t8c2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.771335 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-kube-api-access-nmvwq" (OuterVolumeSpecName: "kube-api-access-nmvwq") pod "14871dd9-6e40-4815-96fb-36f1dbbe2a1b" (UID: "14871dd9-6e40-4815-96fb-36f1dbbe2a1b"). InnerVolumeSpecName "kube-api-access-nmvwq".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.790745 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b06bc00-ef2e-451a-b07d-da301f20df31" (UID: "7b06bc00-ef2e-451a-b07d-da301f20df31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.799360 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14871dd9-6e40-4815-96fb-36f1dbbe2a1b" (UID: "14871dd9-6e40-4815-96fb-36f1dbbe2a1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.865673 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.865738 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t8c2\" (UniqueName: \"kubernetes.io/projected/7b06bc00-ef2e-451a-b07d-da301f20df31-kube-api-access-4t8c2\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.865760 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.865784 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-utilities\") on node \"crc\" 
DevicePath \"\"" Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.865803 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmvwq\" (UniqueName: \"kubernetes.io/projected/14871dd9-6e40-4815-96fb-36f1dbbe2a1b-kube-api-access-nmvwq\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:58 crc kubenswrapper[4756]: I0224 00:15:58.865821 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b06bc00-ef2e-451a-b07d-da301f20df31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.250293 4756 generic.go:334] "Generic (PLEG): container finished" podID="7b06bc00-ef2e-451a-b07d-da301f20df31" containerID="d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546" exitCode=0 Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.250389 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dz9sx" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.250440 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dz9sx" event={"ID":"7b06bc00-ef2e-451a-b07d-da301f20df31","Type":"ContainerDied","Data":"d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546"} Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.251548 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dz9sx" event={"ID":"7b06bc00-ef2e-451a-b07d-da301f20df31","Type":"ContainerDied","Data":"aa496286fa903fea56c066fc7c3d26e2a68f90ba69139b4837f85e32b6faeb97"} Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.251577 4756 scope.go:117] "RemoveContainer" containerID="d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.254620 4756 generic.go:334] "Generic (PLEG): container finished" 
podID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" containerID="ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c" exitCode=0 Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.254686 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwc9" event={"ID":"14871dd9-6e40-4815-96fb-36f1dbbe2a1b","Type":"ContainerDied","Data":"ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c"} Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.254727 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cwc9" event={"ID":"14871dd9-6e40-4815-96fb-36f1dbbe2a1b","Type":"ContainerDied","Data":"93c6bad91fa4a64c89b24d39567acf5dd14633c756a72e72b02f1c608115d60f"} Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.254795 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cwc9" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.272103 4756 scope.go:117] "RemoveContainer" containerID="8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.291920 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwc9"] Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.293307 4756 scope.go:117] "RemoveContainer" containerID="5dcec6047159bcb2730a2db83ec4a45824935af2820ab51c10ecb660f5cc9f67" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.300552 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cwc9"] Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.318886 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dz9sx"] Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.323560 4756 scope.go:117] "RemoveContainer" 
containerID="d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546" Feb 24 00:15:59 crc kubenswrapper[4756]: E0224 00:15:59.324319 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546\": container with ID starting with d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546 not found: ID does not exist" containerID="d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.324377 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546"} err="failed to get container status \"d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546\": rpc error: code = NotFound desc = could not find container \"d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546\": container with ID starting with d6df06760c091775afb712d15cadace01104b87cfdfa0ddc03884235a4e6c546 not found: ID does not exist" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.324417 4756 scope.go:117] "RemoveContainer" containerID="8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.324495 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dz9sx"] Feb 24 00:15:59 crc kubenswrapper[4756]: E0224 00:15:59.325051 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e\": container with ID starting with 8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e not found: ID does not exist" containerID="8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 
00:15:59.325139 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e"} err="failed to get container status \"8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e\": rpc error: code = NotFound desc = could not find container \"8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e\": container with ID starting with 8fae3dbc136ff820b99bdd537fc947794a1d6539134670b2b053aec8b243088e not found: ID does not exist" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.325183 4756 scope.go:117] "RemoveContainer" containerID="5dcec6047159bcb2730a2db83ec4a45824935af2820ab51c10ecb660f5cc9f67" Feb 24 00:15:59 crc kubenswrapper[4756]: E0224 00:15:59.327299 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dcec6047159bcb2730a2db83ec4a45824935af2820ab51c10ecb660f5cc9f67\": container with ID starting with 5dcec6047159bcb2730a2db83ec4a45824935af2820ab51c10ecb660f5cc9f67 not found: ID does not exist" containerID="5dcec6047159bcb2730a2db83ec4a45824935af2820ab51c10ecb660f5cc9f67" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.327358 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dcec6047159bcb2730a2db83ec4a45824935af2820ab51c10ecb660f5cc9f67"} err="failed to get container status \"5dcec6047159bcb2730a2db83ec4a45824935af2820ab51c10ecb660f5cc9f67\": rpc error: code = NotFound desc = could not find container \"5dcec6047159bcb2730a2db83ec4a45824935af2820ab51c10ecb660f5cc9f67\": container with ID starting with 5dcec6047159bcb2730a2db83ec4a45824935af2820ab51c10ecb660f5cc9f67 not found: ID does not exist" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.327397 4756 scope.go:117] "RemoveContainer" containerID="ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c" Feb 24 00:15:59 crc 
kubenswrapper[4756]: I0224 00:15:59.352890 4756 scope.go:117] "RemoveContainer" containerID="77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.374689 4756 scope.go:117] "RemoveContainer" containerID="51039dc7e40092c4b9d026b9aa179951c5d6c031eb49798af7db5a5fb13ae88e" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.393893 4756 scope.go:117] "RemoveContainer" containerID="ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c" Feb 24 00:15:59 crc kubenswrapper[4756]: E0224 00:15:59.395163 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c\": container with ID starting with ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c not found: ID does not exist" containerID="ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.395235 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c"} err="failed to get container status \"ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c\": rpc error: code = NotFound desc = could not find container \"ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c\": container with ID starting with ca62aa832f4d9e20b22de1df1a9745650ecb90205c66747e833e88a7896ba66c not found: ID does not exist" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.395286 4756 scope.go:117] "RemoveContainer" containerID="77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd" Feb 24 00:15:59 crc kubenswrapper[4756]: E0224 00:15:59.395927 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd\": container with ID starting with 77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd not found: ID does not exist" containerID="77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.395999 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd"} err="failed to get container status \"77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd\": rpc error: code = NotFound desc = could not find container \"77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd\": container with ID starting with 77a6f5df9b866aa1e556e88f287e5a028cbf74c1bf37146b2564acde46593fcd not found: ID does not exist" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.396325 4756 scope.go:117] "RemoveContainer" containerID="51039dc7e40092c4b9d026b9aa179951c5d6c031eb49798af7db5a5fb13ae88e" Feb 24 00:15:59 crc kubenswrapper[4756]: E0224 00:15:59.396720 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51039dc7e40092c4b9d026b9aa179951c5d6c031eb49798af7db5a5fb13ae88e\": container with ID starting with 51039dc7e40092c4b9d026b9aa179951c5d6c031eb49798af7db5a5fb13ae88e not found: ID does not exist" containerID="51039dc7e40092c4b9d026b9aa179951c5d6c031eb49798af7db5a5fb13ae88e" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.396762 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51039dc7e40092c4b9d026b9aa179951c5d6c031eb49798af7db5a5fb13ae88e"} err="failed to get container status \"51039dc7e40092c4b9d026b9aa179951c5d6c031eb49798af7db5a5fb13ae88e\": rpc error: code = NotFound desc = could not find container \"51039dc7e40092c4b9d026b9aa179951c5d6c031eb49798af7db5a5fb13ae88e\": container with ID 
starting with 51039dc7e40092c4b9d026b9aa179951c5d6c031eb49798af7db5a5fb13ae88e not found: ID does not exist" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.853921 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" path="/var/lib/kubelet/pods/14871dd9-6e40-4815-96fb-36f1dbbe2a1b/volumes" Feb 24 00:15:59 crc kubenswrapper[4756]: I0224 00:15:59.854578 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b06bc00-ef2e-451a-b07d-da301f20df31" path="/var/lib/kubelet/pods/7b06bc00-ef2e-451a-b07d-da301f20df31/volumes" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.066199 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw"] Feb 24 00:16:02 crc kubenswrapper[4756]: E0224 00:16:02.066788 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b06bc00-ef2e-451a-b07d-da301f20df31" containerName="extract-utilities" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.066806 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b06bc00-ef2e-451a-b07d-da301f20df31" containerName="extract-utilities" Feb 24 00:16:02 crc kubenswrapper[4756]: E0224 00:16:02.066830 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b06bc00-ef2e-451a-b07d-da301f20df31" containerName="registry-server" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.066836 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b06bc00-ef2e-451a-b07d-da301f20df31" containerName="registry-server" Feb 24 00:16:02 crc kubenswrapper[4756]: E0224 00:16:02.066843 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" containerName="extract-content" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.066851 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" 
containerName="extract-content" Feb 24 00:16:02 crc kubenswrapper[4756]: E0224 00:16:02.066865 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" containerName="registry-server" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.066873 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" containerName="registry-server" Feb 24 00:16:02 crc kubenswrapper[4756]: E0224 00:16:02.066882 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b06bc00-ef2e-451a-b07d-da301f20df31" containerName="extract-content" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.066888 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b06bc00-ef2e-451a-b07d-da301f20df31" containerName="extract-content" Feb 24 00:16:02 crc kubenswrapper[4756]: E0224 00:16:02.066894 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" containerName="extract-utilities" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.066900 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" containerName="extract-utilities" Feb 24 00:16:02 crc kubenswrapper[4756]: E0224 00:16:02.066911 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b943ae93-380c-4c31-aaa1-0ab4bf6b0d71" containerName="collect-profiles" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.066917 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b943ae93-380c-4c31-aaa1-0ab4bf6b0d71" containerName="collect-profiles" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.067038 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b943ae93-380c-4c31-aaa1-0ab4bf6b0d71" containerName="collect-profiles" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.067053 4756 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7b06bc00-ef2e-451a-b07d-da301f20df31" containerName="registry-server" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.067079 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="14871dd9-6e40-4815-96fb-36f1dbbe2a1b" containerName="registry-server" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.067921 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.069866 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.082617 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw"] Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.238371 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.238598 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.238727 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-zdh8r\" (UniqueName: \"kubernetes.io/projected/315518ca-e2d6-4701-9e8a-6792f2e4df31-kube-api-access-zdh8r\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.340971 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.341077 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdh8r\" (UniqueName: \"kubernetes.io/projected/315518ca-e2d6-4701-9e8a-6792f2e4df31-kube-api-access-zdh8r\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.341131 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.341660 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-util\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.341694 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.369806 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdh8r\" (UniqueName: \"kubernetes.io/projected/315518ca-e2d6-4701-9e8a-6792f2e4df31-kube-api-access-zdh8r\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.439502 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" Feb 24 00:16:02 crc kubenswrapper[4756]: I0224 00:16:02.684369 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw"] Feb 24 00:16:03 crc kubenswrapper[4756]: I0224 00:16:03.293766 4756 generic.go:334] "Generic (PLEG): container finished" podID="315518ca-e2d6-4701-9e8a-6792f2e4df31" containerID="4cc0bde225d3605d52eaeccfb5285ffbc892d1702340ad617642c82662049373" exitCode=0 Feb 24 00:16:03 crc kubenswrapper[4756]: I0224 00:16:03.293914 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" event={"ID":"315518ca-e2d6-4701-9e8a-6792f2e4df31","Type":"ContainerDied","Data":"4cc0bde225d3605d52eaeccfb5285ffbc892d1702340ad617642c82662049373"} Feb 24 00:16:03 crc kubenswrapper[4756]: I0224 00:16:03.294295 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" event={"ID":"315518ca-e2d6-4701-9e8a-6792f2e4df31","Type":"ContainerStarted","Data":"ca8d9108bb4f43524b819e77d4477d66e45c0b6a5d34b95eb7fc883376fed305"} Feb 24 00:16:03 crc kubenswrapper[4756]: I0224 00:16:03.297834 4756 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 24 00:16:05 crc kubenswrapper[4756]: I0224 00:16:05.308561 4756 generic.go:334] "Generic (PLEG): container finished" podID="315518ca-e2d6-4701-9e8a-6792f2e4df31" containerID="5181a7be30254c269e01ea68477dd34b4e4fbe6ecd87a338d00e06d973659b30" exitCode=0 Feb 24 00:16:05 crc kubenswrapper[4756]: I0224 00:16:05.308630 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" 
event={"ID":"315518ca-e2d6-4701-9e8a-6792f2e4df31","Type":"ContainerDied","Data":"5181a7be30254c269e01ea68477dd34b4e4fbe6ecd87a338d00e06d973659b30"}
Feb 24 00:16:06 crc kubenswrapper[4756]: I0224 00:16:06.317549 4756 generic.go:334] "Generic (PLEG): container finished" podID="315518ca-e2d6-4701-9e8a-6792f2e4df31" containerID="983278381a958d74bcaf7db3c885f94f89ef87359452f80c804390ff81925b10" exitCode=0
Feb 24 00:16:06 crc kubenswrapper[4756]: I0224 00:16:06.317699 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" event={"ID":"315518ca-e2d6-4701-9e8a-6792f2e4df31","Type":"ContainerDied","Data":"983278381a958d74bcaf7db3c885f94f89ef87359452f80c804390ff81925b10"}
Feb 24 00:16:07 crc kubenswrapper[4756]: I0224 00:16:07.571174 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw"
Feb 24 00:16:07 crc kubenswrapper[4756]: I0224 00:16:07.724219 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-bundle\") pod \"315518ca-e2d6-4701-9e8a-6792f2e4df31\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") "
Feb 24 00:16:07 crc kubenswrapper[4756]: I0224 00:16:07.724410 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdh8r\" (UniqueName: \"kubernetes.io/projected/315518ca-e2d6-4701-9e8a-6792f2e4df31-kube-api-access-zdh8r\") pod \"315518ca-e2d6-4701-9e8a-6792f2e4df31\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") "
Feb 24 00:16:07 crc kubenswrapper[4756]: I0224 00:16:07.724564 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-util\") pod \"315518ca-e2d6-4701-9e8a-6792f2e4df31\" (UID: \"315518ca-e2d6-4701-9e8a-6792f2e4df31\") "
Feb 24 00:16:07 crc kubenswrapper[4756]: I0224 00:16:07.729468 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-bundle" (OuterVolumeSpecName: "bundle") pod "315518ca-e2d6-4701-9e8a-6792f2e4df31" (UID: "315518ca-e2d6-4701-9e8a-6792f2e4df31"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:16:07 crc kubenswrapper[4756]: I0224 00:16:07.731521 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/315518ca-e2d6-4701-9e8a-6792f2e4df31-kube-api-access-zdh8r" (OuterVolumeSpecName: "kube-api-access-zdh8r") pod "315518ca-e2d6-4701-9e8a-6792f2e4df31" (UID: "315518ca-e2d6-4701-9e8a-6792f2e4df31"). InnerVolumeSpecName "kube-api-access-zdh8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:16:07 crc kubenswrapper[4756]: I0224 00:16:07.740945 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-util" (OuterVolumeSpecName: "util") pod "315518ca-e2d6-4701-9e8a-6792f2e4df31" (UID: "315518ca-e2d6-4701-9e8a-6792f2e4df31"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:16:07 crc kubenswrapper[4756]: I0224 00:16:07.825674 4756 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 00:16:07 crc kubenswrapper[4756]: I0224 00:16:07.825718 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdh8r\" (UniqueName: \"kubernetes.io/projected/315518ca-e2d6-4701-9e8a-6792f2e4df31-kube-api-access-zdh8r\") on node \"crc\" DevicePath \"\""
Feb 24 00:16:07 crc kubenswrapper[4756]: I0224 00:16:07.825730 4756 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/315518ca-e2d6-4701-9e8a-6792f2e4df31-util\") on node \"crc\" DevicePath \"\""
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.336044 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw" event={"ID":"315518ca-e2d6-4701-9e8a-6792f2e4df31","Type":"ContainerDied","Data":"ca8d9108bb4f43524b819e77d4477d66e45c0b6a5d34b95eb7fc883376fed305"}
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.336365 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca8d9108bb4f43524b819e77d4477d66e45c0b6a5d34b95eb7fc883376fed305"
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.336272 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw"
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.871653 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"]
Feb 24 00:16:08 crc kubenswrapper[4756]: E0224 00:16:08.871934 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315518ca-e2d6-4701-9e8a-6792f2e4df31" containerName="pull"
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.871950 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="315518ca-e2d6-4701-9e8a-6792f2e4df31" containerName="pull"
Feb 24 00:16:08 crc kubenswrapper[4756]: E0224 00:16:08.871964 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315518ca-e2d6-4701-9e8a-6792f2e4df31" containerName="util"
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.871970 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="315518ca-e2d6-4701-9e8a-6792f2e4df31" containerName="util"
Feb 24 00:16:08 crc kubenswrapper[4756]: E0224 00:16:08.871986 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315518ca-e2d6-4701-9e8a-6792f2e4df31" containerName="extract"
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.871992 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="315518ca-e2d6-4701-9e8a-6792f2e4df31" containerName="extract"
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.872114 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="315518ca-e2d6-4701-9e8a-6792f2e4df31" containerName="extract"
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.876331 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.883384 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.886730 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"]
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.957729 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7pdl\" (UniqueName: \"kubernetes.io/projected/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-kube-api-access-q7pdl\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.957796 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:08 crc kubenswrapper[4756]: I0224 00:16:08.957831 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.060053 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7pdl\" (UniqueName: \"kubernetes.io/projected/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-kube-api-access-q7pdl\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.060159 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.060183 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.060813 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.061297 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.083665 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7pdl\" (UniqueName: \"kubernetes.io/projected/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-kube-api-access-q7pdl\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.249425 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.685395 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"]
Feb 24 00:16:09 crc kubenswrapper[4756]: W0224 00:16:09.693250 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fd8c967_ae3a_481d_8f42_b1f23ad5c7f9.slice/crio-eab34151087527d0de7a911854f2e50daed1b7c0216f34bb433fc2e60f2b522e WatchSource:0}: Error finding container eab34151087527d0de7a911854f2e50daed1b7c0216f34bb433fc2e60f2b522e: Status 404 returned error can't find the container with id eab34151087527d0de7a911854f2e50daed1b7c0216f34bb433fc2e60f2b522e
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.874380 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"]
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.876556 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.884591 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"]
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.974458 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.974555 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvd6j\" (UniqueName: \"kubernetes.io/projected/b4cfc239-ad75-473e-a27a-67bf9092971d-kube-api-access-wvd6j\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:09 crc kubenswrapper[4756]: I0224 00:16:09.974652 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:10 crc kubenswrapper[4756]: I0224 00:16:10.076566 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:10 crc kubenswrapper[4756]: I0224 00:16:10.076674 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvd6j\" (UniqueName: \"kubernetes.io/projected/b4cfc239-ad75-473e-a27a-67bf9092971d-kube-api-access-wvd6j\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:10 crc kubenswrapper[4756]: I0224 00:16:10.076717 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:10 crc kubenswrapper[4756]: I0224 00:16:10.077285 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:10 crc kubenswrapper[4756]: I0224 00:16:10.077329 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:10 crc kubenswrapper[4756]: I0224 00:16:10.097025 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvd6j\" (UniqueName: \"kubernetes.io/projected/b4cfc239-ad75-473e-a27a-67bf9092971d-kube-api-access-wvd6j\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:10 crc kubenswrapper[4756]: I0224 00:16:10.211570 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:10 crc kubenswrapper[4756]: I0224 00:16:10.353994 4756 generic.go:334] "Generic (PLEG): container finished" podID="4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" containerID="ab3f15bc3cf15e522eebe9651c5907ee9e0c991381575ab3ffc4e2729f21e6f2" exitCode=0
Feb 24 00:16:10 crc kubenswrapper[4756]: I0224 00:16:10.354104 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65" event={"ID":"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9","Type":"ContainerDied","Data":"ab3f15bc3cf15e522eebe9651c5907ee9e0c991381575ab3ffc4e2729f21e6f2"}
Feb 24 00:16:10 crc kubenswrapper[4756]: I0224 00:16:10.354552 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65" event={"ID":"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9","Type":"ContainerStarted","Data":"eab34151087527d0de7a911854f2e50daed1b7c0216f34bb433fc2e60f2b522e"}
Feb 24 00:16:10 crc kubenswrapper[4756]: I0224 00:16:10.465374 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"]
Feb 24 00:16:11 crc kubenswrapper[4756]: I0224 00:16:11.364327 4756 generic.go:334] "Generic (PLEG): container finished" podID="b4cfc239-ad75-473e-a27a-67bf9092971d" containerID="0d0a63488b9195f46b6313be7094116e4c18acb855b6685836851ef9851b723d" exitCode=0
Feb 24 00:16:11 crc kubenswrapper[4756]: I0224 00:16:11.364401 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9" event={"ID":"b4cfc239-ad75-473e-a27a-67bf9092971d","Type":"ContainerDied","Data":"0d0a63488b9195f46b6313be7094116e4c18acb855b6685836851ef9851b723d"}
Feb 24 00:16:11 crc kubenswrapper[4756]: I0224 00:16:11.364542 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9" event={"ID":"b4cfc239-ad75-473e-a27a-67bf9092971d","Type":"ContainerStarted","Data":"e35bad57972ed9786dc3721377869d5ad79739190284efc81041c1a6637f004e"}
Feb 24 00:16:13 crc kubenswrapper[4756]: I0224 00:16:13.376765 4756 generic.go:334] "Generic (PLEG): container finished" podID="4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" containerID="d10d2b1be6ce663b7ae0d5bdd944856316ab96a42b22e8c7a9d848301744ba57" exitCode=0
Feb 24 00:16:13 crc kubenswrapper[4756]: I0224 00:16:13.377576 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65" event={"ID":"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9","Type":"ContainerDied","Data":"d10d2b1be6ce663b7ae0d5bdd944856316ab96a42b22e8c7a9d848301744ba57"}
Feb 24 00:16:14 crc kubenswrapper[4756]: I0224 00:16:14.394966 4756 generic.go:334] "Generic (PLEG): container finished" podID="b4cfc239-ad75-473e-a27a-67bf9092971d" containerID="b6603fd8101ccf4f067c07f0f5f57e91c8c47f85ebbaa4ac4dbd816a36af3bb9" exitCode=0
Feb 24 00:16:14 crc kubenswrapper[4756]: I0224 00:16:14.395081 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9" event={"ID":"b4cfc239-ad75-473e-a27a-67bf9092971d","Type":"ContainerDied","Data":"b6603fd8101ccf4f067c07f0f5f57e91c8c47f85ebbaa4ac4dbd816a36af3bb9"}
Feb 24 00:16:14 crc kubenswrapper[4756]: I0224 00:16:14.399254 4756 generic.go:334] "Generic (PLEG): container finished" podID="4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" containerID="5709ab8d4bc3d3a000c54ad5fdb0b5ee877d087e2a25501f3b430bf0d9bd79af" exitCode=0
Feb 24 00:16:14 crc kubenswrapper[4756]: I0224 00:16:14.399306 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65" event={"ID":"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9","Type":"ContainerDied","Data":"5709ab8d4bc3d3a000c54ad5fdb0b5ee877d087e2a25501f3b430bf0d9bd79af"}
Feb 24 00:16:15 crc kubenswrapper[4756]: I0224 00:16:15.409208 4756 generic.go:334] "Generic (PLEG): container finished" podID="b4cfc239-ad75-473e-a27a-67bf9092971d" containerID="066d1bbba3f47a3fcd93b981a51d356631861d20ea2951751026877ebf56814b" exitCode=0
Feb 24 00:16:15 crc kubenswrapper[4756]: I0224 00:16:15.409292 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9" event={"ID":"b4cfc239-ad75-473e-a27a-67bf9092971d","Type":"ContainerDied","Data":"066d1bbba3f47a3fcd93b981a51d356631861d20ea2951751026877ebf56814b"}
Feb 24 00:16:15 crc kubenswrapper[4756]: I0224 00:16:15.792487 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:15 crc kubenswrapper[4756]: I0224 00:16:15.953589 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7pdl\" (UniqueName: \"kubernetes.io/projected/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-kube-api-access-q7pdl\") pod \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") "
Feb 24 00:16:15 crc kubenswrapper[4756]: I0224 00:16:15.953683 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-util\") pod \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") "
Feb 24 00:16:15 crc kubenswrapper[4756]: I0224 00:16:15.953841 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-bundle\") pod \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\" (UID: \"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9\") "
Feb 24 00:16:15 crc kubenswrapper[4756]: I0224 00:16:15.955156 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-bundle" (OuterVolumeSpecName: "bundle") pod "4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" (UID: "4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:16:15 crc kubenswrapper[4756]: I0224 00:16:15.969402 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-kube-api-access-q7pdl" (OuterVolumeSpecName: "kube-api-access-q7pdl") pod "4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" (UID: "4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9"). InnerVolumeSpecName "kube-api-access-q7pdl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:16:15 crc kubenswrapper[4756]: I0224 00:16:15.976393 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-util" (OuterVolumeSpecName: "util") pod "4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" (UID: "4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.055157 4756 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.055207 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7pdl\" (UniqueName: \"kubernetes.io/projected/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-kube-api-access-q7pdl\") on node \"crc\" DevicePath \"\""
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.055222 4756 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9-util\") on node \"crc\" DevicePath \"\""
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.421818 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65" event={"ID":"4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9","Type":"ContainerDied","Data":"eab34151087527d0de7a911854f2e50daed1b7c0216f34bb433fc2e60f2b522e"}
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.422284 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab34151087527d0de7a911854f2e50daed1b7c0216f34bb433fc2e60f2b522e"
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.436881 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65"
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.813143 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.969122 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-bundle\") pod \"b4cfc239-ad75-473e-a27a-67bf9092971d\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") "
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.969626 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvd6j\" (UniqueName: \"kubernetes.io/projected/b4cfc239-ad75-473e-a27a-67bf9092971d-kube-api-access-wvd6j\") pod \"b4cfc239-ad75-473e-a27a-67bf9092971d\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") "
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.969711 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-util\") pod \"b4cfc239-ad75-473e-a27a-67bf9092971d\" (UID: \"b4cfc239-ad75-473e-a27a-67bf9092971d\") "
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.979404 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4cfc239-ad75-473e-a27a-67bf9092971d-kube-api-access-wvd6j" (OuterVolumeSpecName: "kube-api-access-wvd6j") pod "b4cfc239-ad75-473e-a27a-67bf9092971d" (UID: "b4cfc239-ad75-473e-a27a-67bf9092971d"). InnerVolumeSpecName "kube-api-access-wvd6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.979630 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-bundle" (OuterVolumeSpecName: "bundle") pod "b4cfc239-ad75-473e-a27a-67bf9092971d" (UID: "b4cfc239-ad75-473e-a27a-67bf9092971d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:16:16 crc kubenswrapper[4756]: I0224 00:16:16.980464 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-util" (OuterVolumeSpecName: "util") pod "b4cfc239-ad75-473e-a27a-67bf9092971d" (UID: "b4cfc239-ad75-473e-a27a-67bf9092971d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.070998 4756 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-bundle\") on node \"crc\" DevicePath \"\""
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.071043 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvd6j\" (UniqueName: \"kubernetes.io/projected/b4cfc239-ad75-473e-a27a-67bf9092971d-kube-api-access-wvd6j\") on node \"crc\" DevicePath \"\""
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.071058 4756 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4cfc239-ad75-473e-a27a-67bf9092971d-util\") on node \"crc\" DevicePath \"\""
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.362751 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"]
Feb 24 00:16:17 crc kubenswrapper[4756]: E0224 00:16:17.363024 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cfc239-ad75-473e-a27a-67bf9092971d" containerName="extract"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.363039 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cfc239-ad75-473e-a27a-67bf9092971d" containerName="extract"
Feb 24 00:16:17 crc kubenswrapper[4756]: E0224 00:16:17.363053 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cfc239-ad75-473e-a27a-67bf9092971d" containerName="util"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.363079 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cfc239-ad75-473e-a27a-67bf9092971d" containerName="util"
Feb 24 00:16:17 crc kubenswrapper[4756]: E0224 00:16:17.363095 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" containerName="util"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.363102 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" containerName="util"
Feb 24 00:16:17 crc kubenswrapper[4756]: E0224 00:16:17.363113 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" containerName="extract"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.363120 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" containerName="extract"
Feb 24 00:16:17 crc kubenswrapper[4756]: E0224 00:16:17.363132 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" containerName="pull"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.363139 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" containerName="pull"
Feb 24 00:16:17 crc kubenswrapper[4756]: E0224 00:16:17.363148 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4cfc239-ad75-473e-a27a-67bf9092971d" containerName="pull"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.363154 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4cfc239-ad75-473e-a27a-67bf9092971d" containerName="pull"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.363262 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9" containerName="extract"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.363271 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4cfc239-ad75-473e-a27a-67bf9092971d" containerName="extract"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.364149 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.386415 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"]
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.431399 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9" event={"ID":"b4cfc239-ad75-473e-a27a-67bf9092971d","Type":"ContainerDied","Data":"e35bad57972ed9786dc3721377869d5ad79739190284efc81041c1a6637f004e"}
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.431450 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35bad57972ed9786dc3721377869d5ad79739190284efc81041c1a6637f004e"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.431463 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.476307 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.476375 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.476477 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2zjt\" (UniqueName: \"kubernetes.io/projected/067fb988-cad9-439c-b782-2d988453e44a-kube-api-access-q2zjt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.577291 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.577374 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.577442 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2zjt\" (UniqueName: \"kubernetes.io/projected/067fb988-cad9-439c-b782-2d988453e44a-kube-api-access-q2zjt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.577906 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.577983 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.607606 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2zjt\" (UniqueName: \"kubernetes.io/projected/067fb988-cad9-439c-b782-2d988453e44a-kube-api-access-q2zjt\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"
Feb 24 00:16:17 crc kubenswrapper[4756]: I0224 00:16:17.680582 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"
Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.156022 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr"]
Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.440820 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr" event={"ID":"067fb988-cad9-439c-b782-2d988453e44a","Type":"ContainerStarted","Data":"e18449568669783e81b368baf9d49e6aff7d2e70ac2fbf3827e19d002430e119"}
Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.440883 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr" event={"ID":"067fb988-cad9-439c-b782-2d988453e44a","Type":"ContainerStarted","Data":"a5f39055f5808bb90562d5a2de802bb8e0131fc05af7de2fabe447f95cafad37"}
Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.794146 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq"]
Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.795724 4756 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq" Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.798589 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.798741 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-z9qpk" Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.809028 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.820276 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq"] Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.895547 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6q66\" (UniqueName: \"kubernetes.io/projected/22e3c396-5773-4933-a2fe-7a0250aee650-kube-api-access-f6q66\") pod \"obo-prometheus-operator-68bc856cb9-x8qdq\" (UID: \"22e3c396-5773-4933-a2fe-7a0250aee650\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq" Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.961706 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl"] Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.962564 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl" Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.964897 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g99qx" Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.972994 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.990644 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t"] Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.991697 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t" Feb 24 00:16:18 crc kubenswrapper[4756]: I0224 00:16:18.997049 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6q66\" (UniqueName: \"kubernetes.io/projected/22e3c396-5773-4933-a2fe-7a0250aee650-kube-api-access-f6q66\") pod \"obo-prometheus-operator-68bc856cb9-x8qdq\" (UID: \"22e3c396-5773-4933-a2fe-7a0250aee650\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.011417 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl"] Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.042334 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6q66\" (UniqueName: \"kubernetes.io/projected/22e3c396-5773-4933-a2fe-7a0250aee650-kube-api-access-f6q66\") pod \"obo-prometheus-operator-68bc856cb9-x8qdq\" (UID: \"22e3c396-5773-4933-a2fe-7a0250aee650\") " 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.098784 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28ae67eb-01b5-40ad-8076-9013470234a9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl\" (UID: \"28ae67eb-01b5-40ad-8076-9013470234a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.098862 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t\" (UID: \"f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.098955 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t\" (UID: \"f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.099116 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28ae67eb-01b5-40ad-8076-9013470234a9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl\" (UID: \"28ae67eb-01b5-40ad-8076-9013470234a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl" Feb 24 00:16:19 crc 
kubenswrapper[4756]: I0224 00:16:19.106808 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t"] Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.110035 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.153580 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-82zhl"] Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.154376 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-82zhl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.156385 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.156567 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-ncgq2" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.174673 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-82zhl"] Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.202960 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t\" (UID: \"f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.203027 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t\" (UID: \"f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.203101 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28ae67eb-01b5-40ad-8076-9013470234a9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl\" (UID: \"28ae67eb-01b5-40ad-8076-9013470234a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.203172 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28ae67eb-01b5-40ad-8076-9013470234a9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl\" (UID: \"28ae67eb-01b5-40ad-8076-9013470234a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.208180 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t\" (UID: \"f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.211369 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t\" (UID: \"f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.214737 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/28ae67eb-01b5-40ad-8076-9013470234a9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl\" (UID: \"28ae67eb-01b5-40ad-8076-9013470234a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.215696 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/28ae67eb-01b5-40ad-8076-9013470234a9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl\" (UID: \"28ae67eb-01b5-40ad-8076-9013470234a9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.284407 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.305854 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs27g\" (UniqueName: \"kubernetes.io/projected/239cf156-509e-41e1-b1ac-f3ebe3fb4067-kube-api-access-xs27g\") pod \"observability-operator-59bdc8b94-82zhl\" (UID: \"239cf156-509e-41e1-b1ac-f3ebe3fb4067\") " pod="openshift-operators/observability-operator-59bdc8b94-82zhl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.305957 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/239cf156-509e-41e1-b1ac-f3ebe3fb4067-observability-operator-tls\") pod \"observability-operator-59bdc8b94-82zhl\" (UID: \"239cf156-509e-41e1-b1ac-f3ebe3fb4067\") " pod="openshift-operators/observability-operator-59bdc8b94-82zhl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.310215 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.336830 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-cqnkq"] Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.337723 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.340653 4756 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-g27zm" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.399996 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-cqnkq"] Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.408946 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/96e7fe96-4c58-44fd-b5a2-0fffa0e28e29-openshift-service-ca\") pod \"perses-operator-5bf474d74f-cqnkq\" (UID: \"96e7fe96-4c58-44fd-b5a2-0fffa0e28e29\") " pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.409012 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/239cf156-509e-41e1-b1ac-f3ebe3fb4067-observability-operator-tls\") pod \"observability-operator-59bdc8b94-82zhl\" (UID: \"239cf156-509e-41e1-b1ac-f3ebe3fb4067\") " pod="openshift-operators/observability-operator-59bdc8b94-82zhl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.409081 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6jls\" (UniqueName: \"kubernetes.io/projected/96e7fe96-4c58-44fd-b5a2-0fffa0e28e29-kube-api-access-p6jls\") pod \"perses-operator-5bf474d74f-cqnkq\" (UID: \"96e7fe96-4c58-44fd-b5a2-0fffa0e28e29\") " pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.409114 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs27g\" (UniqueName: 
\"kubernetes.io/projected/239cf156-509e-41e1-b1ac-f3ebe3fb4067-kube-api-access-xs27g\") pod \"observability-operator-59bdc8b94-82zhl\" (UID: \"239cf156-509e-41e1-b1ac-f3ebe3fb4067\") " pod="openshift-operators/observability-operator-59bdc8b94-82zhl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.430138 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/239cf156-509e-41e1-b1ac-f3ebe3fb4067-observability-operator-tls\") pod \"observability-operator-59bdc8b94-82zhl\" (UID: \"239cf156-509e-41e1-b1ac-f3ebe3fb4067\") " pod="openshift-operators/observability-operator-59bdc8b94-82zhl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.433860 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs27g\" (UniqueName: \"kubernetes.io/projected/239cf156-509e-41e1-b1ac-f3ebe3fb4067-kube-api-access-xs27g\") pod \"observability-operator-59bdc8b94-82zhl\" (UID: \"239cf156-509e-41e1-b1ac-f3ebe3fb4067\") " pod="openshift-operators/observability-operator-59bdc8b94-82zhl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.458554 4756 generic.go:334] "Generic (PLEG): container finished" podID="067fb988-cad9-439c-b782-2d988453e44a" containerID="e18449568669783e81b368baf9d49e6aff7d2e70ac2fbf3827e19d002430e119" exitCode=0 Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.458602 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr" event={"ID":"067fb988-cad9-439c-b782-2d988453e44a","Type":"ContainerDied","Data":"e18449568669783e81b368baf9d49e6aff7d2e70ac2fbf3827e19d002430e119"} Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.477320 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-82zhl" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.491325 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq"] Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.511515 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6jls\" (UniqueName: \"kubernetes.io/projected/96e7fe96-4c58-44fd-b5a2-0fffa0e28e29-kube-api-access-p6jls\") pod \"perses-operator-5bf474d74f-cqnkq\" (UID: \"96e7fe96-4c58-44fd-b5a2-0fffa0e28e29\") " pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.511604 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/96e7fe96-4c58-44fd-b5a2-0fffa0e28e29-openshift-service-ca\") pod \"perses-operator-5bf474d74f-cqnkq\" (UID: \"96e7fe96-4c58-44fd-b5a2-0fffa0e28e29\") " pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.512542 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/96e7fe96-4c58-44fd-b5a2-0fffa0e28e29-openshift-service-ca\") pod \"perses-operator-5bf474d74f-cqnkq\" (UID: \"96e7fe96-4c58-44fd-b5a2-0fffa0e28e29\") " pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.536800 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6jls\" (UniqueName: \"kubernetes.io/projected/96e7fe96-4c58-44fd-b5a2-0fffa0e28e29-kube-api-access-p6jls\") pod \"perses-operator-5bf474d74f-cqnkq\" (UID: \"96e7fe96-4c58-44fd-b5a2-0fffa0e28e29\") " pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 
00:16:19.692592 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.738046 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl"] Feb 24 00:16:19 crc kubenswrapper[4756]: W0224 00:16:19.778109 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28ae67eb_01b5_40ad_8076_9013470234a9.slice/crio-641f163761657f83acb4f53d297464404f33269692e38822d59d21a9b662bac9 WatchSource:0}: Error finding container 641f163761657f83acb4f53d297464404f33269692e38822d59d21a9b662bac9: Status 404 returned error can't find the container with id 641f163761657f83acb4f53d297464404f33269692e38822d59d21a9b662bac9 Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.785357 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t"] Feb 24 00:16:19 crc kubenswrapper[4756]: I0224 00:16:19.868737 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-82zhl"] Feb 24 00:16:19 crc kubenswrapper[4756]: W0224 00:16:19.898821 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod239cf156_509e_41e1_b1ac_f3ebe3fb4067.slice/crio-65fbe1e455e77b3a22377c06e2431d1b3502469fa10daaaebe706bba780eb82c WatchSource:0}: Error finding container 65fbe1e455e77b3a22377c06e2431d1b3502469fa10daaaebe706bba780eb82c: Status 404 returned error can't find the container with id 65fbe1e455e77b3a22377c06e2431d1b3502469fa10daaaebe706bba780eb82c Feb 24 00:16:20 crc kubenswrapper[4756]: I0224 00:16:20.027012 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-cqnkq"] 
Feb 24 00:16:20 crc kubenswrapper[4756]: W0224 00:16:20.050091 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96e7fe96_4c58_44fd_b5a2_0fffa0e28e29.slice/crio-9773b9db9f467d176cd128ae1a6416a5694a2b9ffe2c28cb5a68deea6fca2a06 WatchSource:0}: Error finding container 9773b9db9f467d176cd128ae1a6416a5694a2b9ffe2c28cb5a68deea6fca2a06: Status 404 returned error can't find the container with id 9773b9db9f467d176cd128ae1a6416a5694a2b9ffe2c28cb5a68deea6fca2a06 Feb 24 00:16:20 crc kubenswrapper[4756]: I0224 00:16:20.481282 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq" event={"ID":"22e3c396-5773-4933-a2fe-7a0250aee650","Type":"ContainerStarted","Data":"f0f4dcc354f76e24aaedb304c3dc73f5c5dde9f1e0aef459858427037ffa49f7"} Feb 24 00:16:20 crc kubenswrapper[4756]: I0224 00:16:20.486916 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" event={"ID":"96e7fe96-4c58-44fd-b5a2-0fffa0e28e29","Type":"ContainerStarted","Data":"9773b9db9f467d176cd128ae1a6416a5694a2b9ffe2c28cb5a68deea6fca2a06"} Feb 24 00:16:20 crc kubenswrapper[4756]: I0224 00:16:20.487804 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl" event={"ID":"28ae67eb-01b5-40ad-8076-9013470234a9","Type":"ContainerStarted","Data":"641f163761657f83acb4f53d297464404f33269692e38822d59d21a9b662bac9"} Feb 24 00:16:20 crc kubenswrapper[4756]: I0224 00:16:20.488543 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t" event={"ID":"f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6","Type":"ContainerStarted","Data":"a7e62b925aed0d3636c36129661e46b6635de0152e4a895d9565937f4a2ec8e4"} Feb 24 00:16:20 crc kubenswrapper[4756]: I0224 00:16:20.489264 4756 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-82zhl" event={"ID":"239cf156-509e-41e1-b1ac-f3ebe3fb4067","Type":"ContainerStarted","Data":"65fbe1e455e77b3a22377c06e2431d1b3502469fa10daaaebe706bba780eb82c"} Feb 24 00:16:22 crc kubenswrapper[4756]: I0224 00:16:22.711396 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:16:22 crc kubenswrapper[4756]: I0224 00:16:22.711796 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:16:22 crc kubenswrapper[4756]: I0224 00:16:22.711848 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:16:22 crc kubenswrapper[4756]: I0224 00:16:22.712537 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a250b84fb5cf3d50f1c8f73096c01a386ce6306ff496adf3c239fdd204f80ec3"} pod="openshift-machine-config-operator/machine-config-daemon-qb88h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:16:22 crc kubenswrapper[4756]: I0224 00:16:22.712597 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" 
containerID="cri-o://a250b84fb5cf3d50f1c8f73096c01a386ce6306ff496adf3c239fdd204f80ec3" gracePeriod=600 Feb 24 00:16:23 crc kubenswrapper[4756]: I0224 00:16:23.569928 4756 generic.go:334] "Generic (PLEG): container finished" podID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerID="a250b84fb5cf3d50f1c8f73096c01a386ce6306ff496adf3c239fdd204f80ec3" exitCode=0 Feb 24 00:16:23 crc kubenswrapper[4756]: I0224 00:16:23.569989 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerDied","Data":"a250b84fb5cf3d50f1c8f73096c01a386ce6306ff496adf3c239fdd204f80ec3"} Feb 24 00:16:23 crc kubenswrapper[4756]: I0224 00:16:23.570524 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerStarted","Data":"1d01c4008be0b2356d9dc9b4c088f54b7dea3397664e049fd73bc626db7cec60"} Feb 24 00:16:23 crc kubenswrapper[4756]: I0224 00:16:23.570553 4756 scope.go:117] "RemoveContainer" containerID="3462137efdd08477626614f3719291731120c43a9269032f0fe7f282c877172c" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.499173 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-6c5fd479fc-8x7gc"] Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.500558 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.502824 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.503568 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.503766 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.505381 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-4sblk" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.529472 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6c5fd479fc-8x7gc"] Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.619994 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxzdm\" (UniqueName: \"kubernetes.io/projected/32e25cf6-d4bc-4e75-9853-8122f049c43d-kube-api-access-qxzdm\") pod \"elastic-operator-6c5fd479fc-8x7gc\" (UID: \"32e25cf6-d4bc-4e75-9853-8122f049c43d\") " pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.620153 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32e25cf6-d4bc-4e75-9853-8122f049c43d-apiservice-cert\") pod \"elastic-operator-6c5fd479fc-8x7gc\" (UID: \"32e25cf6-d4bc-4e75-9853-8122f049c43d\") " pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.620187 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32e25cf6-d4bc-4e75-9853-8122f049c43d-webhook-cert\") pod \"elastic-operator-6c5fd479fc-8x7gc\" (UID: \"32e25cf6-d4bc-4e75-9853-8122f049c43d\") " pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.721926 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxzdm\" (UniqueName: \"kubernetes.io/projected/32e25cf6-d4bc-4e75-9853-8122f049c43d-kube-api-access-qxzdm\") pod \"elastic-operator-6c5fd479fc-8x7gc\" (UID: \"32e25cf6-d4bc-4e75-9853-8122f049c43d\") " pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.722026 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32e25cf6-d4bc-4e75-9853-8122f049c43d-apiservice-cert\") pod \"elastic-operator-6c5fd479fc-8x7gc\" (UID: \"32e25cf6-d4bc-4e75-9853-8122f049c43d\") " pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.722138 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32e25cf6-d4bc-4e75-9853-8122f049c43d-webhook-cert\") pod \"elastic-operator-6c5fd479fc-8x7gc\" (UID: \"32e25cf6-d4bc-4e75-9853-8122f049c43d\") " pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.729608 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32e25cf6-d4bc-4e75-9853-8122f049c43d-webhook-cert\") pod \"elastic-operator-6c5fd479fc-8x7gc\" (UID: \"32e25cf6-d4bc-4e75-9853-8122f049c43d\") " pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.729772 4756 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32e25cf6-d4bc-4e75-9853-8122f049c43d-apiservice-cert\") pod \"elastic-operator-6c5fd479fc-8x7gc\" (UID: \"32e25cf6-d4bc-4e75-9853-8122f049c43d\") " pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.743198 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxzdm\" (UniqueName: \"kubernetes.io/projected/32e25cf6-d4bc-4e75-9853-8122f049c43d-kube-api-access-qxzdm\") pod \"elastic-operator-6c5fd479fc-8x7gc\" (UID: \"32e25cf6-d4bc-4e75-9853-8122f049c43d\") " pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" Feb 24 00:16:24 crc kubenswrapper[4756]: I0224 00:16:24.827347 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" Feb 24 00:16:30 crc kubenswrapper[4756]: I0224 00:16:30.153915 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-xmmxf"] Feb 24 00:16:30 crc kubenswrapper[4756]: I0224 00:16:30.155802 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-xmmxf" Feb 24 00:16:30 crc kubenswrapper[4756]: I0224 00:16:30.162270 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-dxq7g" Feb 24 00:16:30 crc kubenswrapper[4756]: I0224 00:16:30.169300 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-xmmxf"] Feb 24 00:16:30 crc kubenswrapper[4756]: I0224 00:16:30.308571 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp24t\" (UniqueName: \"kubernetes.io/projected/56cbf5c8-f834-4cff-975e-c64c3b490331-kube-api-access-vp24t\") pod \"interconnect-operator-5bb49f789d-xmmxf\" (UID: \"56cbf5c8-f834-4cff-975e-c64c3b490331\") " pod="service-telemetry/interconnect-operator-5bb49f789d-xmmxf" Feb 24 00:16:30 crc kubenswrapper[4756]: I0224 00:16:30.409979 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp24t\" (UniqueName: \"kubernetes.io/projected/56cbf5c8-f834-4cff-975e-c64c3b490331-kube-api-access-vp24t\") pod \"interconnect-operator-5bb49f789d-xmmxf\" (UID: \"56cbf5c8-f834-4cff-975e-c64c3b490331\") " pod="service-telemetry/interconnect-operator-5bb49f789d-xmmxf" Feb 24 00:16:30 crc kubenswrapper[4756]: I0224 00:16:30.435582 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp24t\" (UniqueName: \"kubernetes.io/projected/56cbf5c8-f834-4cff-975e-c64c3b490331-kube-api-access-vp24t\") pod \"interconnect-operator-5bb49f789d-xmmxf\" (UID: \"56cbf5c8-f834-4cff-975e-c64c3b490331\") " pod="service-telemetry/interconnect-operator-5bb49f789d-xmmxf" Feb 24 00:16:30 crc kubenswrapper[4756]: I0224 00:16:30.523940 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-xmmxf" Feb 24 00:16:35 crc kubenswrapper[4756]: E0224 00:16:35.127857 4756 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Feb 24 00:16:35 crc kubenswrapper[4756]: E0224 00:16:35.128936 4756 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true --disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f6q66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-x8qdq_openshift-operators(22e3c396-5773-4933-a2fe-7a0250aee650): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 24 00:16:35 crc kubenswrapper[4756]: E0224 00:16:35.130245 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq" podUID="22e3c396-5773-4933-a2fe-7a0250aee650" Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.532236 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-xmmxf"] Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.603658 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6c5fd479fc-8x7gc"] Feb 24 00:16:35 crc kubenswrapper[4756]: W0224 00:16:35.621209 
4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32e25cf6_d4bc_4e75_9853_8122f049c43d.slice/crio-9424dd20c6d503e0df9479a6c85ad73b9bed0fd4fa8ef4f71d3e428ddb26cf54 WatchSource:0}: Error finding container 9424dd20c6d503e0df9479a6c85ad73b9bed0fd4fa8ef4f71d3e428ddb26cf54: Status 404 returned error can't find the container with id 9424dd20c6d503e0df9479a6c85ad73b9bed0fd4fa8ef4f71d3e428ddb26cf54 Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.717773 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl" event={"ID":"28ae67eb-01b5-40ad-8076-9013470234a9","Type":"ContainerStarted","Data":"f8f9517a50f0eed7c1189a0798a0152c51c3b8fd42314c6cad562f0044935dcf"} Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.723534 4756 generic.go:334] "Generic (PLEG): container finished" podID="067fb988-cad9-439c-b782-2d988453e44a" containerID="50023317155edb3646df7e0e505133be3ab61cf36850d9837c17856981871468" exitCode=0 Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.723653 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr" event={"ID":"067fb988-cad9-439c-b782-2d988453e44a","Type":"ContainerDied","Data":"50023317155edb3646df7e0e505133be3ab61cf36850d9837c17856981871468"} Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.724853 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t" event={"ID":"f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6","Type":"ContainerStarted","Data":"7f5c98b18f44f9c12fe8851f51729d52d3df8c29c4396ffe2407990d7b25a0aa"} Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.740473 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-82zhl" 
event={"ID":"239cf156-509e-41e1-b1ac-f3ebe3fb4067","Type":"ContainerStarted","Data":"51ea0f2e81d85ddc6c8245cd40e1b8bb16753643b5b896849412754e556fea01"} Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.740824 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-82zhl" Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.745312 4756 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-82zhl container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.46:8081/healthz\": dial tcp 10.217.0.46:8081: connect: connection refused" start-of-body= Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.745400 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-82zhl" podUID="239cf156-509e-41e1-b1ac-f3ebe3fb4067" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.46:8081/healthz\": dial tcp 10.217.0.46:8081: connect: connection refused" Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.746821 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" event={"ID":"32e25cf6-d4bc-4e75-9853-8122f049c43d","Type":"ContainerStarted","Data":"9424dd20c6d503e0df9479a6c85ad73b9bed0fd4fa8ef4f71d3e428ddb26cf54"} Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.755359 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-xmmxf" event={"ID":"56cbf5c8-f834-4cff-975e-c64c3b490331","Type":"ContainerStarted","Data":"470e7bd03a21e6be7d3a31afb0c5f4d5800a178b2fa51a13aa0ebc91f08cf99b"} Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.763008 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl" podStartSLOduration=2.362356295 
podStartE2EDuration="17.762982267s" podCreationTimestamp="2026-02-24 00:16:18 +0000 UTC" firstStartedPulling="2026-02-24 00:16:19.806280025 +0000 UTC m=+636.717142658" lastFinishedPulling="2026-02-24 00:16:35.206905997 +0000 UTC m=+652.117768630" observedRunningTime="2026-02-24 00:16:35.756906634 +0000 UTC m=+652.667769267" watchObservedRunningTime="2026-02-24 00:16:35.762982267 +0000 UTC m=+652.673844900" Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.764614 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" event={"ID":"96e7fe96-4c58-44fd-b5a2-0fffa0e28e29","Type":"ContainerStarted","Data":"3287b65d6d7ed0e6fba1f958976afcfb19e70353a180f818de887e8c19055066"} Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.764659 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" Feb 24 00:16:35 crc kubenswrapper[4756]: E0224 00:16:35.801195 4756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq" podUID="22e3c396-5773-4933-a2fe-7a0250aee650" Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.887084 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-82zhl" podStartSLOduration=1.5825603579999998 podStartE2EDuration="16.887049393s" podCreationTimestamp="2026-02-24 00:16:19 +0000 UTC" firstStartedPulling="2026-02-24 00:16:19.9044473 +0000 UTC m=+636.815309933" lastFinishedPulling="2026-02-24 00:16:35.208936345 +0000 UTC m=+652.119798968" observedRunningTime="2026-02-24 00:16:35.845474348 +0000 UTC m=+652.756336981" 
watchObservedRunningTime="2026-02-24 00:16:35.887049393 +0000 UTC m=+652.797912036" Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.923664 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t" podStartSLOduration=2.502638981 podStartE2EDuration="17.923625856s" podCreationTimestamp="2026-02-24 00:16:18 +0000 UTC" firstStartedPulling="2026-02-24 00:16:19.825866601 +0000 UTC m=+636.736729234" lastFinishedPulling="2026-02-24 00:16:35.246853476 +0000 UTC m=+652.157716109" observedRunningTime="2026-02-24 00:16:35.884794459 +0000 UTC m=+652.795657102" watchObservedRunningTime="2026-02-24 00:16:35.923625856 +0000 UTC m=+652.834488489" Feb 24 00:16:35 crc kubenswrapper[4756]: I0224 00:16:35.980645 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" podStartSLOduration=1.827668992 podStartE2EDuration="16.98062138s" podCreationTimestamp="2026-02-24 00:16:19 +0000 UTC" firstStartedPulling="2026-02-24 00:16:20.054592587 +0000 UTC m=+636.965455220" lastFinishedPulling="2026-02-24 00:16:35.207544975 +0000 UTC m=+652.118407608" observedRunningTime="2026-02-24 00:16:35.978452458 +0000 UTC m=+652.889315091" watchObservedRunningTime="2026-02-24 00:16:35.98062138 +0000 UTC m=+652.891484003" Feb 24 00:16:36 crc kubenswrapper[4756]: I0224 00:16:36.815669 4756 generic.go:334] "Generic (PLEG): container finished" podID="067fb988-cad9-439c-b782-2d988453e44a" containerID="be3051a727d2c3c7c6cf921897658b67ec7a9f7d2c4e8fb97cd209279ff6d2a1" exitCode=0 Feb 24 00:16:36 crc kubenswrapper[4756]: I0224 00:16:36.815771 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr" event={"ID":"067fb988-cad9-439c-b782-2d988453e44a","Type":"ContainerDied","Data":"be3051a727d2c3c7c6cf921897658b67ec7a9f7d2c4e8fb97cd209279ff6d2a1"} 
Feb 24 00:16:36 crc kubenswrapper[4756]: I0224 00:16:36.818421 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-82zhl" Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.256449 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr" Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.376047 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-util\") pod \"067fb988-cad9-439c-b782-2d988453e44a\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.376159 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2zjt\" (UniqueName: \"kubernetes.io/projected/067fb988-cad9-439c-b782-2d988453e44a-kube-api-access-q2zjt\") pod \"067fb988-cad9-439c-b782-2d988453e44a\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.376182 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-bundle\") pod \"067fb988-cad9-439c-b782-2d988453e44a\" (UID: \"067fb988-cad9-439c-b782-2d988453e44a\") " Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.403819 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-util" (OuterVolumeSpecName: "util") pod "067fb988-cad9-439c-b782-2d988453e44a" (UID: "067fb988-cad9-439c-b782-2d988453e44a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.404265 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-bundle" (OuterVolumeSpecName: "bundle") pod "067fb988-cad9-439c-b782-2d988453e44a" (UID: "067fb988-cad9-439c-b782-2d988453e44a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.404295 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067fb988-cad9-439c-b782-2d988453e44a-kube-api-access-q2zjt" (OuterVolumeSpecName: "kube-api-access-q2zjt") pod "067fb988-cad9-439c-b782-2d988453e44a" (UID: "067fb988-cad9-439c-b782-2d988453e44a"). InnerVolumeSpecName "kube-api-access-q2zjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.478959 4756 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-util\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.479476 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2zjt\" (UniqueName: \"kubernetes.io/projected/067fb988-cad9-439c-b782-2d988453e44a-kube-api-access-q2zjt\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.479488 4756 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/067fb988-cad9-439c-b782-2d988453e44a-bundle\") on node \"crc\" DevicePath \"\"" Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.879415 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr" 
event={"ID":"067fb988-cad9-439c-b782-2d988453e44a","Type":"ContainerDied","Data":"a5f39055f5808bb90562d5a2de802bb8e0131fc05af7de2fabe447f95cafad37"} Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.879480 4756 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f39055f5808bb90562d5a2de802bb8e0131fc05af7de2fabe447f95cafad37" Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.879493 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr" Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.883097 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" event={"ID":"32e25cf6-d4bc-4e75-9853-8122f049c43d","Type":"ContainerStarted","Data":"32ef4837ab5f18b742fb146a6623906cdc594a5283b1c6e28040fd4a081b0da3"} Feb 24 00:16:39 crc kubenswrapper[4756]: I0224 00:16:39.919026 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-6c5fd479fc-8x7gc" podStartSLOduration=12.220024254 podStartE2EDuration="15.919005903s" podCreationTimestamp="2026-02-24 00:16:24 +0000 UTC" firstStartedPulling="2026-02-24 00:16:35.624008566 +0000 UTC m=+652.534871199" lastFinishedPulling="2026-02-24 00:16:39.322990215 +0000 UTC m=+656.233852848" observedRunningTime="2026-02-24 00:16:39.914542916 +0000 UTC m=+656.825405569" watchObservedRunningTime="2026-02-24 00:16:39.919005903 +0000 UTC m=+656.829868536" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.866345 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 24 00:16:40 crc kubenswrapper[4756]: E0224 00:16:40.866986 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067fb988-cad9-439c-b782-2d988453e44a" containerName="util" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.866997 4756 
state_mem.go:107] "Deleted CPUSet assignment" podUID="067fb988-cad9-439c-b782-2d988453e44a" containerName="util" Feb 24 00:16:40 crc kubenswrapper[4756]: E0224 00:16:40.867015 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067fb988-cad9-439c-b782-2d988453e44a" containerName="extract" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.867021 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="067fb988-cad9-439c-b782-2d988453e44a" containerName="extract" Feb 24 00:16:40 crc kubenswrapper[4756]: E0224 00:16:40.867035 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067fb988-cad9-439c-b782-2d988453e44a" containerName="pull" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.867042 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="067fb988-cad9-439c-b782-2d988453e44a" containerName="pull" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.867181 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="067fb988-cad9-439c-b782-2d988453e44a" containerName="extract" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.869358 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.874766 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.875005 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.875698 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.876475 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.876759 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.876879 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.877012 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-lr74s" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.877183 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.877353 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Feb 24 00:16:40 crc kubenswrapper[4756]: I0224 00:16:40.914695 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.003312 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/11ba1f74-b3f8-495d-bba1-3c710f853a4b-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.003373 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.003675 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.003866 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.003971 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: 
\"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.004101 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.004147 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.004214 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.004247 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: 
I0224 00:16:41.004449 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.004545 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.004584 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.004650 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.004693 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: 
\"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.004843 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106557 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106624 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106647 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc 
kubenswrapper[4756]: I0224 00:16:41.106669 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106695 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106721 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106746 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/11ba1f74-b3f8-495d-bba1-3c710f853a4b-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106765 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106810 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106836 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106858 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106878 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106896 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106919 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.106935 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.107449 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.107564 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.107753 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.107976 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.108270 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.108698 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.109699 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " 
pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.111645 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.114040 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.114512 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/11ba1f74-b3f8-495d-bba1-3c710f853a4b-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.116498 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.118185 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-elasticsearch-config\") pod 
\"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.118590 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.118668 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.135683 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/11ba1f74-b3f8-495d-bba1-3c710f853a4b-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"11ba1f74-b3f8-495d-bba1-3c710f853a4b\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:41 crc kubenswrapper[4756]: I0224 00:16:41.189054 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:16:46 crc kubenswrapper[4756]: I0224 00:16:46.729500 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 24 00:16:46 crc kubenswrapper[4756]: I0224 00:16:46.933859 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"11ba1f74-b3f8-495d-bba1-3c710f853a4b","Type":"ContainerStarted","Data":"ca873fa558ce525fd57126f253092e30f9226dd0d4b5b3f8c9cb068532708e75"} Feb 24 00:16:46 crc kubenswrapper[4756]: I0224 00:16:46.935531 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-xmmxf" event={"ID":"56cbf5c8-f834-4cff-975e-c64c3b490331","Type":"ContainerStarted","Data":"a7a7532b9ecf65dd4d7dc940661e4fac5e9fe9137dc2e45c70ba42ab44875b7b"} Feb 24 00:16:49 crc kubenswrapper[4756]: I0224 00:16:49.696250 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-cqnkq" Feb 24 00:16:49 crc kubenswrapper[4756]: I0224 00:16:49.719805 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-xmmxf" podStartSLOduration=8.833621306 podStartE2EDuration="19.719780806s" podCreationTimestamp="2026-02-24 00:16:30 +0000 UTC" firstStartedPulling="2026-02-24 00:16:35.548252416 +0000 UTC m=+652.459115049" lastFinishedPulling="2026-02-24 00:16:46.434411916 +0000 UTC m=+663.345274549" observedRunningTime="2026-02-24 00:16:46.955291413 +0000 UTC m=+663.866154046" watchObservedRunningTime="2026-02-24 00:16:49.719780806 +0000 UTC m=+666.630643439" Feb 24 00:16:51 crc kubenswrapper[4756]: I0224 00:16:51.968632 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq" 
event={"ID":"22e3c396-5773-4933-a2fe-7a0250aee650","Type":"ContainerStarted","Data":"ead6a03065420750e63d4d5a54a75c726ae824296c4d86d8bee3452cae7eda0f"} Feb 24 00:16:52 crc kubenswrapper[4756]: I0224 00:16:52.016425 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-x8qdq" podStartSLOduration=2.027474724 podStartE2EDuration="34.016394764s" podCreationTimestamp="2026-02-24 00:16:18 +0000 UTC" firstStartedPulling="2026-02-24 00:16:19.534601727 +0000 UTC m=+636.445464360" lastFinishedPulling="2026-02-24 00:16:51.523521767 +0000 UTC m=+668.434384400" observedRunningTime="2026-02-24 00:16:52.007836891 +0000 UTC m=+668.918699524" watchObservedRunningTime="2026-02-24 00:16:52.016394764 +0000 UTC m=+668.927257407" Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.524593 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h"] Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.532374 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h" Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.535597 4756 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-d98zn" Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.535880 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.535938 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.539668 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h"] Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.547246 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tqb6\" (UniqueName: \"kubernetes.io/projected/6b474d63-461a-4525-a039-7c44eada14ee-kube-api-access-6tqb6\") pod \"cert-manager-operator-controller-manager-5586865c96-jnk5h\" (UID: \"6b474d63-461a-4525-a039-7c44eada14ee\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h" Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.547352 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b474d63-461a-4525-a039-7c44eada14ee-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-jnk5h\" (UID: \"6b474d63-461a-4525-a039-7c44eada14ee\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h" Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.648351 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6tqb6\" (UniqueName: \"kubernetes.io/projected/6b474d63-461a-4525-a039-7c44eada14ee-kube-api-access-6tqb6\") pod \"cert-manager-operator-controller-manager-5586865c96-jnk5h\" (UID: \"6b474d63-461a-4525-a039-7c44eada14ee\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h" Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.648439 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b474d63-461a-4525-a039-7c44eada14ee-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-jnk5h\" (UID: \"6b474d63-461a-4525-a039-7c44eada14ee\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h" Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.649281 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6b474d63-461a-4525-a039-7c44eada14ee-tmp\") pod \"cert-manager-operator-controller-manager-5586865c96-jnk5h\" (UID: \"6b474d63-461a-4525-a039-7c44eada14ee\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h" Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.671426 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tqb6\" (UniqueName: \"kubernetes.io/projected/6b474d63-461a-4525-a039-7c44eada14ee-kube-api-access-6tqb6\") pod \"cert-manager-operator-controller-manager-5586865c96-jnk5h\" (UID: \"6b474d63-461a-4525-a039-7c44eada14ee\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h" Feb 24 00:17:03 crc kubenswrapper[4756]: I0224 00:17:03.854341 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h" Feb 24 00:17:04 crc kubenswrapper[4756]: I0224 00:17:04.206483 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h"] Feb 24 00:17:05 crc kubenswrapper[4756]: I0224 00:17:05.106434 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h" event={"ID":"6b474d63-461a-4525-a039-7c44eada14ee","Type":"ContainerStarted","Data":"165aba06679480bdfd4ba5bd56e50257f87099c9c605e93f42622fa721bf32d5"} Feb 24 00:17:14 crc kubenswrapper[4756]: I0224 00:17:14.200393 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"11ba1f74-b3f8-495d-bba1-3c710f853a4b","Type":"ContainerStarted","Data":"61b2744c19d11a19a8ab1aa2dfd9ace548aead083a5cfa8e28efdd0d57e58d32"} Feb 24 00:17:14 crc kubenswrapper[4756]: I0224 00:17:14.202445 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h" event={"ID":"6b474d63-461a-4525-a039-7c44eada14ee","Type":"ContainerStarted","Data":"68b9dbe91da8fd09c6120db1356ef72a355600f38c47bf94091c3725c8dc1092"} Feb 24 00:17:14 crc kubenswrapper[4756]: I0224 00:17:14.275785 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5586865c96-jnk5h" podStartSLOduration=1.7281458079999998 podStartE2EDuration="11.275760826s" podCreationTimestamp="2026-02-24 00:17:03 +0000 UTC" firstStartedPulling="2026-02-24 00:17:04.229538086 +0000 UTC m=+681.140400709" lastFinishedPulling="2026-02-24 00:17:13.777153094 +0000 UTC m=+690.688015727" observedRunningTime="2026-02-24 00:17:14.272433841 +0000 UTC m=+691.183296474" watchObservedRunningTime="2026-02-24 00:17:14.275760826 +0000 
UTC m=+691.186623459" Feb 24 00:17:14 crc kubenswrapper[4756]: I0224 00:17:14.383816 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 24 00:17:14 crc kubenswrapper[4756]: I0224 00:17:14.447668 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.217882 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"11ba1f74-b3f8-495d-bba1-3c710f853a4b","Type":"ContainerDied","Data":"61b2744c19d11a19a8ab1aa2dfd9ace548aead083a5cfa8e28efdd0d57e58d32"} Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.217884 4756 generic.go:334] "Generic (PLEG): container finished" podID="11ba1f74-b3f8-495d-bba1-3c710f853a4b" containerID="61b2744c19d11a19a8ab1aa2dfd9ace548aead083a5cfa8e28efdd0d57e58d32" exitCode=0 Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.900901 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.902549 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.905166 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.905634 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.905669 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.905757 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6fxlg" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.919405 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.982400 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.982491 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.982696 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.982744 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.982773 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.982815 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.982852 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-1-build\" (UID: 
\"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.982937 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zc47\" (UniqueName: \"kubernetes.io/projected/95fdb953-937c-4479-b253-8eb4ba52ee12-kube-api-access-6zc47\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.982986 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.983040 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.983105 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:16 crc kubenswrapper[4756]: I0224 00:17:16.983143 4756 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.075391 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-gxk5z"] Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.076273 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.078642 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.079317 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.079551 4756 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cq66w" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.084944 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085011 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87943173-5ada-4bb0-8baf-d1b8fdf8ea9a-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-gxk5z\" 
(UID: \"87943173-5ada-4bb0-8baf-d1b8fdf8ea9a\") " pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085047 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085091 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085120 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085172 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085218 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085266 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085294 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085321 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085350 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085383 4756 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085411 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zc47\" (UniqueName: \"kubernetes.io/projected/95fdb953-937c-4479-b253-8eb4ba52ee12-kube-api-access-6zc47\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085448 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr4n8\" (UniqueName: \"kubernetes.io/projected/87943173-5ada-4bb0-8baf-d1b8fdf8ea9a-kube-api-access-wr4n8\") pod \"cert-manager-webhook-6888856db4-gxk5z\" (UID: \"87943173-5ada-4bb0-8baf-d1b8fdf8ea9a\") " pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.085800 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.086030 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.086470 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.086537 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.086559 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.086737 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.087025 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-system-configs\") pod \"service-telemetry-operator-1-build\" 
(UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.087124 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.089467 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.090209 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-gxk5z"] Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.092986 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.094041 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: 
I0224 00:17:17.108830 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zc47\" (UniqueName: \"kubernetes.io/projected/95fdb953-937c-4479-b253-8eb4ba52ee12-kube-api-access-6zc47\") pod \"service-telemetry-operator-1-build\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.186840 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr4n8\" (UniqueName: \"kubernetes.io/projected/87943173-5ada-4bb0-8baf-d1b8fdf8ea9a-kube-api-access-wr4n8\") pod \"cert-manager-webhook-6888856db4-gxk5z\" (UID: \"87943173-5ada-4bb0-8baf-d1b8fdf8ea9a\") " pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.186918 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87943173-5ada-4bb0-8baf-d1b8fdf8ea9a-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-gxk5z\" (UID: \"87943173-5ada-4bb0-8baf-d1b8fdf8ea9a\") " pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.209791 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87943173-5ada-4bb0-8baf-d1b8fdf8ea9a-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-gxk5z\" (UID: \"87943173-5ada-4bb0-8baf-d1b8fdf8ea9a\") " pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.215782 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr4n8\" (UniqueName: \"kubernetes.io/projected/87943173-5ada-4bb0-8baf-d1b8fdf8ea9a-kube-api-access-wr4n8\") pod \"cert-manager-webhook-6888856db4-gxk5z\" (UID: \"87943173-5ada-4bb0-8baf-d1b8fdf8ea9a\") " 
pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.221441 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.228562 4756 generic.go:334] "Generic (PLEG): container finished" podID="11ba1f74-b3f8-495d-bba1-3c710f853a4b" containerID="9ea0f8333a2ef292060be78d9801ec97545390c3e45471b420c377f6e1119cf1" exitCode=0 Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.228645 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"11ba1f74-b3f8-495d-bba1-3c710f853a4b","Type":"ContainerDied","Data":"9ea0f8333a2ef292060be78d9801ec97545390c3e45471b420c377f6e1119cf1"} Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.463271 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.872027 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:17:17 crc kubenswrapper[4756]: I0224 00:17:17.904957 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-gxk5z"] Feb 24 00:17:17 crc kubenswrapper[4756]: W0224 00:17:17.913956 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87943173_5ada_4bb0_8baf_d1b8fdf8ea9a.slice/crio-5b91bd4f8cded4419af09b6faf6fd675d4118172ac9dedd9a6c8980edd4c5b5f WatchSource:0}: Error finding container 5b91bd4f8cded4419af09b6faf6fd675d4118172ac9dedd9a6c8980edd4c5b5f: Status 404 returned error can't find the container with id 5b91bd4f8cded4419af09b6faf6fd675d4118172ac9dedd9a6c8980edd4c5b5f Feb 24 00:17:18 crc kubenswrapper[4756]: I0224 00:17:18.236315 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"95fdb953-937c-4479-b253-8eb4ba52ee12","Type":"ContainerStarted","Data":"bf89f618812ce2d8278a41fcd9a981a2fde91b894212d2039756e250d7ce429d"} Feb 24 00:17:18 crc kubenswrapper[4756]: I0224 00:17:18.237569 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" event={"ID":"87943173-5ada-4bb0-8baf-d1b8fdf8ea9a","Type":"ContainerStarted","Data":"5b91bd4f8cded4419af09b6faf6fd675d4118172ac9dedd9a6c8980edd4c5b5f"} Feb 24 00:17:18 crc kubenswrapper[4756]: I0224 00:17:18.239946 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"11ba1f74-b3f8-495d-bba1-3c710f853a4b","Type":"ContainerStarted","Data":"300eb3d6b832a391cf9f561589ccc7709d1a1cc1fa57171f1458dd851c400e91"} Feb 24 00:17:18 crc kubenswrapper[4756]: I0224 00:17:18.240358 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:17:18 crc kubenswrapper[4756]: I0224 00:17:18.275232 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=11.105903684 podStartE2EDuration="38.275212719s" podCreationTimestamp="2026-02-24 00:16:40 +0000 UTC" firstStartedPulling="2026-02-24 00:16:46.742144177 +0000 UTC m=+663.653006810" lastFinishedPulling="2026-02-24 00:17:13.911453222 +0000 UTC m=+690.822315845" observedRunningTime="2026-02-24 00:17:18.271602676 +0000 UTC m=+695.182465319" watchObservedRunningTime="2026-02-24 00:17:18.275212719 +0000 UTC m=+695.186075352" Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.128473 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-n9tl7"] Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.130408 4756 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-n9tl7" Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.133203 4756 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4gh42" Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.145085 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-n9tl7"] Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.159212 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbvb4\" (UniqueName: \"kubernetes.io/projected/f248b635-79e8-4b65-9006-0c4b074800f5-kube-api-access-gbvb4\") pod \"cert-manager-cainjector-5545bd876-n9tl7\" (UID: \"f248b635-79e8-4b65-9006-0c4b074800f5\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n9tl7" Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.159371 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f248b635-79e8-4b65-9006-0c4b074800f5-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-n9tl7\" (UID: \"f248b635-79e8-4b65-9006-0c4b074800f5\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n9tl7" Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.260572 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbvb4\" (UniqueName: \"kubernetes.io/projected/f248b635-79e8-4b65-9006-0c4b074800f5-kube-api-access-gbvb4\") pod \"cert-manager-cainjector-5545bd876-n9tl7\" (UID: \"f248b635-79e8-4b65-9006-0c4b074800f5\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n9tl7" Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.261346 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/f248b635-79e8-4b65-9006-0c4b074800f5-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-n9tl7\" (UID: \"f248b635-79e8-4b65-9006-0c4b074800f5\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n9tl7" Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.279888 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbvb4\" (UniqueName: \"kubernetes.io/projected/f248b635-79e8-4b65-9006-0c4b074800f5-kube-api-access-gbvb4\") pod \"cert-manager-cainjector-5545bd876-n9tl7\" (UID: \"f248b635-79e8-4b65-9006-0c4b074800f5\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n9tl7" Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.286347 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f248b635-79e8-4b65-9006-0c4b074800f5-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-n9tl7\" (UID: \"f248b635-79e8-4b65-9006-0c4b074800f5\") " pod="cert-manager/cert-manager-cainjector-5545bd876-n9tl7" Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.456672 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-n9tl7" Feb 24 00:17:19 crc kubenswrapper[4756]: I0224 00:17:19.827916 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-n9tl7"] Feb 24 00:17:19 crc kubenswrapper[4756]: W0224 00:17:19.846922 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf248b635_79e8_4b65_9006_0c4b074800f5.slice/crio-41077df16069a2d101e47008a2c973cb950f02315a55f689025a79043b0bb311 WatchSource:0}: Error finding container 41077df16069a2d101e47008a2c973cb950f02315a55f689025a79043b0bb311: Status 404 returned error can't find the container with id 41077df16069a2d101e47008a2c973cb950f02315a55f689025a79043b0bb311 Feb 24 00:17:20 crc kubenswrapper[4756]: I0224 00:17:20.255975 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-n9tl7" event={"ID":"f248b635-79e8-4b65-9006-0c4b074800f5","Type":"ContainerStarted","Data":"41077df16069a2d101e47008a2c973cb950f02315a55f689025a79043b0bb311"} Feb 24 00:17:27 crc kubenswrapper[4756]: I0224 00:17:27.325354 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.018934 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.021053 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.023847 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.023857 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.024203 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.115918 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128553 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128621 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128649 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128668 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128715 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128736 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128760 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 
00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128791 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128809 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbcms\" (UniqueName: \"kubernetes.io/projected/1cf684de-fe48-44ba-b593-9e840c802ea8-kube-api-access-gbcms\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128842 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128865 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.128887 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: 
\"kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.230424 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.230586 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.231266 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.231316 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.231412 4756 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbcms\" (UniqueName: \"kubernetes.io/projected/1cf684de-fe48-44ba-b593-9e840c802ea8-kube-api-access-gbcms\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.231474 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.231545 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.231652 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.231938 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc 
kubenswrapper[4756]: I0224 00:17:29.232020 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.232239 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.232303 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.232342 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.232377 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-build-blob-cache\") pod \"service-telemetry-operator-2-build\" 
(UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.232444 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.232478 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.232826 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.232982 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.233643 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.233654 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.234962 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.247191 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.247231 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.252544 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbcms\" (UniqueName: \"kubernetes.io/projected/1cf684de-fe48-44ba-b593-9e840c802ea8-kube-api-access-gbcms\") pod \"service-telemetry-operator-2-build\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.318461 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"95fdb953-937c-4479-b253-8eb4ba52ee12","Type":"ContainerStarted","Data":"05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68"} Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.318593 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="95fdb953-937c-4479-b253-8eb4ba52ee12" containerName="manage-dockerfile" containerID="cri-o://05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68" gracePeriod=30 Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.329016 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" event={"ID":"87943173-5ada-4bb0-8baf-d1b8fdf8ea9a","Type":"ContainerStarted","Data":"b99c308bb8563148d67dec323ef0615ac7bacd42b99312de3b698a134266db37"} Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.329247 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.330732 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-n9tl7" event={"ID":"f248b635-79e8-4b65-9006-0c4b074800f5","Type":"ContainerStarted","Data":"3e6fee9a2fffc6a8e9e889a872426ae4c492bcb3fed02d50b3476b497379bd85"} Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.367232 4756 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.369481 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-n9tl7" podStartSLOduration=1.847535326 podStartE2EDuration="10.36945458s" podCreationTimestamp="2026-02-24 00:17:19 +0000 UTC" firstStartedPulling="2026-02-24 00:17:19.850241961 +0000 UTC m=+696.761104594" lastFinishedPulling="2026-02-24 00:17:28.372161215 +0000 UTC m=+705.283023848" observedRunningTime="2026-02-24 00:17:29.368911715 +0000 UTC m=+706.279774388" watchObservedRunningTime="2026-02-24 00:17:29.36945458 +0000 UTC m=+706.280317213" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.404370 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" podStartSLOduration=1.931977219 podStartE2EDuration="12.404333504s" podCreationTimestamp="2026-02-24 00:17:17 +0000 UTC" firstStartedPulling="2026-02-24 00:17:17.915645981 +0000 UTC m=+694.826508614" lastFinishedPulling="2026-02-24 00:17:28.388002266 +0000 UTC m=+705.298864899" observedRunningTime="2026-02-24 00:17:29.403670926 +0000 UTC m=+706.314533569" watchObservedRunningTime="2026-02-24 00:17:29.404333504 +0000 UTC m=+706.315196147" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.787739 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_95fdb953-937c-4479-b253-8eb4ba52ee12/manage-dockerfile/0.log" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.788389 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943535 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-build-blob-cache\") pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943585 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-system-configs\") pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943622 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-node-pullsecrets\") pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943651 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zc47\" (UniqueName: \"kubernetes.io/projected/95fdb953-937c-4479-b253-8eb4ba52ee12-kube-api-access-6zc47\") pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943693 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-ca-bundles\") pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943738 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-buildcachedir\") pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943775 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-run\") pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943794 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-root\") pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943813 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-buildworkdir\") pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943860 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-proxy-ca-bundles\") pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943898 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-push\") 
pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.943934 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-pull\") pod \"95fdb953-937c-4479-b253-8eb4ba52ee12\" (UID: \"95fdb953-937c-4479-b253-8eb4ba52ee12\") " Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.945711 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.946153 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.946210 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.947389 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.947895 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.947921 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.950404 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.951112 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fdb953-937c-4479-b253-8eb4ba52ee12-kube-api-access-6zc47" (OuterVolumeSpecName: "kube-api-access-6zc47") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "kube-api-access-6zc47". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.953390 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.953467 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.953791 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-pull" (OuterVolumeSpecName: "builder-dockercfg-6fxlg-pull") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "builder-dockercfg-6fxlg-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.958638 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: I0224 00:17:29.959273 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-push" (OuterVolumeSpecName: "builder-dockercfg-6fxlg-push") pod "95fdb953-937c-4479-b253-8eb4ba52ee12" (UID: "95fdb953-937c-4479-b253-8eb4ba52ee12"). InnerVolumeSpecName "builder-dockercfg-6fxlg-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:17:29 crc kubenswrapper[4756]: W0224 00:17:29.959587 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cf684de_fe48_44ba_b593_9e840c802ea8.slice/crio-d6593148869e2e8b3774d04f11c5f22cfd2f17ad50c51fb27e932b7fe3af06e1 WatchSource:0}: Error finding container d6593148869e2e8b3774d04f11c5f22cfd2f17ad50c51fb27e932b7fe3af06e1: Status 404 returned error can't find the container with id d6593148869e2e8b3774d04f11c5f22cfd2f17ad50c51fb27e932b7fe3af06e1 Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.045823 4756 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.046253 4756 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.046271 4756 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.046282 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zc47\" (UniqueName: \"kubernetes.io/projected/95fdb953-937c-4479-b253-8eb4ba52ee12-kube-api-access-6zc47\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.046294 4756 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.046305 4756 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/95fdb953-937c-4479-b253-8eb4ba52ee12-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.046315 4756 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.046326 4756 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.046336 4756 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/95fdb953-937c-4479-b253-8eb4ba52ee12-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.046347 4756 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95fdb953-937c-4479-b253-8eb4ba52ee12-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.046357 4756 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.046367 4756 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/95fdb953-937c-4479-b253-8eb4ba52ee12-builder-dockercfg-6fxlg-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.339154 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_95fdb953-937c-4479-b253-8eb4ba52ee12/manage-dockerfile/0.log" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.339218 4756 generic.go:334] "Generic (PLEG): container finished" podID="95fdb953-937c-4479-b253-8eb4ba52ee12" containerID="05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68" exitCode=1 Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.339281 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"95fdb953-937c-4479-b253-8eb4ba52ee12","Type":"ContainerDied","Data":"05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68"} Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.339290 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.339319 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"95fdb953-937c-4479-b253-8eb4ba52ee12","Type":"ContainerDied","Data":"bf89f618812ce2d8278a41fcd9a981a2fde91b894212d2039756e250d7ce429d"} Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.339343 4756 scope.go:117] "RemoveContainer" containerID="05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.340969 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"1cf684de-fe48-44ba-b593-9e840c802ea8","Type":"ContainerStarted","Data":"ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9"} Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.341005 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"1cf684de-fe48-44ba-b593-9e840c802ea8","Type":"ContainerStarted","Data":"d6593148869e2e8b3774d04f11c5f22cfd2f17ad50c51fb27e932b7fe3af06e1"} Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.370911 4756 scope.go:117] "RemoveContainer" containerID="05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68" Feb 24 00:17:30 crc kubenswrapper[4756]: E0224 00:17:30.371568 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68\": container with ID starting with 05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68 not found: ID does not exist" containerID="05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.371629 4756 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68"} err="failed to get container status \"05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68\": rpc error: code = NotFound desc = could not find container \"05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68\": container with ID starting with 05d1d5cb6c95e7f7262ee4ab284e1ff447f48bf33b0c80c1fa75bb745cb9df68 not found: ID does not exist" Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.417790 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:17:30 crc kubenswrapper[4756]: I0224 00:17:30.424008 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Feb 24 00:17:30 crc kubenswrapper[4756]: E0224 00:17:30.433471 4756 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=3808825962168627305, SKID=, AKID=92:1A:89:11:71:52:23:1E:EE:1B:2B:03:E0:6A:89:C1:91:7A:4B:10 failed: x509: certificate signed by unknown authority" Feb 24 00:17:31 crc kubenswrapper[4756]: I0224 00:17:31.286733 4756 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="11ba1f74-b3f8-495d-bba1-3c710f853a4b" containerName="elasticsearch" probeResult="failure" output=< Feb 24 00:17:31 crc kubenswrapper[4756]: {"timestamp": "2026-02-24T00:17:31+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 24 00:17:31 crc kubenswrapper[4756]: > Feb 24 00:17:31 crc kubenswrapper[4756]: I0224 00:17:31.464682 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:17:31 crc kubenswrapper[4756]: I0224 00:17:31.842215 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fdb953-937c-4479-b253-8eb4ba52ee12" 
path="/var/lib/kubelet/pods/95fdb953-937c-4479-b253-8eb4ba52ee12/volumes" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.359290 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-2-build" podUID="1cf684de-fe48-44ba-b593-9e840c802ea8" containerName="git-clone" containerID="cri-o://ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9" gracePeriod=30 Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.797756 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_1cf684de-fe48-44ba-b593-9e840c802ea8/git-clone/0.log" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.798454 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.887851 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-build-blob-cache\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.887919 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbcms\" (UniqueName: \"kubernetes.io/projected/1cf684de-fe48-44ba-b593-9e840c802ea8-kube-api-access-gbcms\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.887979 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-run\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc 
kubenswrapper[4756]: I0224 00:17:32.888008 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-pull\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888050 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-buildworkdir\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888094 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-buildcachedir\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888136 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-system-configs\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888182 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-root\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888213 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: 
\"kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-push\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888236 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-node-pullsecrets\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888272 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-proxy-ca-bundles\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888298 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-ca-bundles\") pod \"1cf684de-fe48-44ba-b593-9e840c802ea8\" (UID: \"1cf684de-fe48-44ba-b593-9e840c802ea8\") " Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888330 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888440 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888505 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888550 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.888595 4756 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.889052 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.889281 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.889295 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.889333 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.889602 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.900195 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-pull" (OuterVolumeSpecName: "builder-dockercfg-6fxlg-pull") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "builder-dockercfg-6fxlg-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.907429 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf684de-fe48-44ba-b593-9e840c802ea8-kube-api-access-gbcms" (OuterVolumeSpecName: "kube-api-access-gbcms") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "kube-api-access-gbcms". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.907582 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-push" (OuterVolumeSpecName: "builder-dockercfg-6fxlg-push") pod "1cf684de-fe48-44ba-b593-9e840c802ea8" (UID: "1cf684de-fe48-44ba-b593-9e840c802ea8"). InnerVolumeSpecName "builder-dockercfg-6fxlg-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.990515 4756 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.990939 4756 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1cf684de-fe48-44ba-b593-9e840c802ea8-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.991028 4756 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.991130 4756 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.991202 4756 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.991269 4756 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbcms\" (UniqueName: \"kubernetes.io/projected/1cf684de-fe48-44ba-b593-9e840c802ea8-kube-api-access-gbcms\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.991346 4756 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.991416 4756 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/1cf684de-fe48-44ba-b593-9e840c802ea8-builder-dockercfg-6fxlg-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.991503 4756 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.991588 4756 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1cf684de-fe48-44ba-b593-9e840c802ea8-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:32 crc kubenswrapper[4756]: I0224 00:17:32.991702 4756 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1cf684de-fe48-44ba-b593-9e840c802ea8-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:33 crc kubenswrapper[4756]: I0224 00:17:33.367536 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_1cf684de-fe48-44ba-b593-9e840c802ea8/git-clone/0.log" Feb 24 00:17:33 crc kubenswrapper[4756]: I0224 00:17:33.367597 4756 generic.go:334] "Generic (PLEG): container finished" 
podID="1cf684de-fe48-44ba-b593-9e840c802ea8" containerID="ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9" exitCode=1 Feb 24 00:17:33 crc kubenswrapper[4756]: I0224 00:17:33.367640 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"1cf684de-fe48-44ba-b593-9e840c802ea8","Type":"ContainerDied","Data":"ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9"} Feb 24 00:17:33 crc kubenswrapper[4756]: I0224 00:17:33.367677 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"1cf684de-fe48-44ba-b593-9e840c802ea8","Type":"ContainerDied","Data":"d6593148869e2e8b3774d04f11c5f22cfd2f17ad50c51fb27e932b7fe3af06e1"} Feb 24 00:17:33 crc kubenswrapper[4756]: I0224 00:17:33.367701 4756 scope.go:117] "RemoveContainer" containerID="ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9" Feb 24 00:17:33 crc kubenswrapper[4756]: I0224 00:17:33.367861 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Feb 24 00:17:33 crc kubenswrapper[4756]: I0224 00:17:33.400913 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:17:33 crc kubenswrapper[4756]: I0224 00:17:33.404437 4756 scope.go:117] "RemoveContainer" containerID="ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9" Feb 24 00:17:33 crc kubenswrapper[4756]: E0224 00:17:33.405040 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9\": container with ID starting with ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9 not found: ID does not exist" containerID="ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9" Feb 24 00:17:33 crc kubenswrapper[4756]: I0224 00:17:33.405145 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9"} err="failed to get container status \"ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9\": rpc error: code = NotFound desc = could not find container \"ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9\": container with ID starting with ce5ac2150e108380824f72e9283744f8228bc85681261665657a7e1968ca38f9 not found: ID does not exist" Feb 24 00:17:33 crc kubenswrapper[4756]: I0224 00:17:33.407895 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Feb 24 00:17:33 crc kubenswrapper[4756]: I0224 00:17:33.841760 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cf684de-fe48-44ba-b593-9e840c802ea8" path="/var/lib/kubelet/pods/1cf684de-fe48-44ba-b593-9e840c802ea8/volumes" Feb 24 00:17:35 crc kubenswrapper[4756]: I0224 00:17:35.921475 4756 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-669c8"] Feb 24 00:17:35 crc kubenswrapper[4756]: E0224 00:17:35.922237 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fdb953-937c-4479-b253-8eb4ba52ee12" containerName="manage-dockerfile" Feb 24 00:17:35 crc kubenswrapper[4756]: I0224 00:17:35.922256 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fdb953-937c-4479-b253-8eb4ba52ee12" containerName="manage-dockerfile" Feb 24 00:17:35 crc kubenswrapper[4756]: E0224 00:17:35.922291 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf684de-fe48-44ba-b593-9e840c802ea8" containerName="git-clone" Feb 24 00:17:35 crc kubenswrapper[4756]: I0224 00:17:35.922302 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf684de-fe48-44ba-b593-9e840c802ea8" containerName="git-clone" Feb 24 00:17:35 crc kubenswrapper[4756]: I0224 00:17:35.922444 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf684de-fe48-44ba-b593-9e840c802ea8" containerName="git-clone" Feb 24 00:17:35 crc kubenswrapper[4756]: I0224 00:17:35.922472 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fdb953-937c-4479-b253-8eb4ba52ee12" containerName="manage-dockerfile" Feb 24 00:17:35 crc kubenswrapper[4756]: I0224 00:17:35.923138 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-669c8" Feb 24 00:17:35 crc kubenswrapper[4756]: I0224 00:17:35.925373 4756 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2zsvb" Feb 24 00:17:35 crc kubenswrapper[4756]: I0224 00:17:35.945847 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-669c8"] Feb 24 00:17:36 crc kubenswrapper[4756]: I0224 00:17:36.031445 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzpd4\" (UniqueName: \"kubernetes.io/projected/14aba63c-c480-47ac-ae8d-568569f4d558-kube-api-access-fzpd4\") pod \"cert-manager-545d4d4674-669c8\" (UID: \"14aba63c-c480-47ac-ae8d-568569f4d558\") " pod="cert-manager/cert-manager-545d4d4674-669c8" Feb 24 00:17:36 crc kubenswrapper[4756]: I0224 00:17:36.031543 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14aba63c-c480-47ac-ae8d-568569f4d558-bound-sa-token\") pod \"cert-manager-545d4d4674-669c8\" (UID: \"14aba63c-c480-47ac-ae8d-568569f4d558\") " pod="cert-manager/cert-manager-545d4d4674-669c8" Feb 24 00:17:36 crc kubenswrapper[4756]: I0224 00:17:36.133534 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14aba63c-c480-47ac-ae8d-568569f4d558-bound-sa-token\") pod \"cert-manager-545d4d4674-669c8\" (UID: \"14aba63c-c480-47ac-ae8d-568569f4d558\") " pod="cert-manager/cert-manager-545d4d4674-669c8" Feb 24 00:17:36 crc kubenswrapper[4756]: I0224 00:17:36.133666 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzpd4\" (UniqueName: \"kubernetes.io/projected/14aba63c-c480-47ac-ae8d-568569f4d558-kube-api-access-fzpd4\") pod \"cert-manager-545d4d4674-669c8\" (UID: 
\"14aba63c-c480-47ac-ae8d-568569f4d558\") " pod="cert-manager/cert-manager-545d4d4674-669c8" Feb 24 00:17:36 crc kubenswrapper[4756]: I0224 00:17:36.153302 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzpd4\" (UniqueName: \"kubernetes.io/projected/14aba63c-c480-47ac-ae8d-568569f4d558-kube-api-access-fzpd4\") pod \"cert-manager-545d4d4674-669c8\" (UID: \"14aba63c-c480-47ac-ae8d-568569f4d558\") " pod="cert-manager/cert-manager-545d4d4674-669c8" Feb 24 00:17:36 crc kubenswrapper[4756]: I0224 00:17:36.165201 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/14aba63c-c480-47ac-ae8d-568569f4d558-bound-sa-token\") pod \"cert-manager-545d4d4674-669c8\" (UID: \"14aba63c-c480-47ac-ae8d-568569f4d558\") " pod="cert-manager/cert-manager-545d4d4674-669c8" Feb 24 00:17:36 crc kubenswrapper[4756]: I0224 00:17:36.240108 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-669c8" Feb 24 00:17:36 crc kubenswrapper[4756]: I0224 00:17:36.461266 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-669c8"] Feb 24 00:17:36 crc kubenswrapper[4756]: W0224 00:17:36.475931 4756 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14aba63c_c480_47ac_ae8d_568569f4d558.slice/crio-da9e0f805cefc8ba4efe44d04eba587e9bd0f878f5dba1cc56e69352b66799aa WatchSource:0}: Error finding container da9e0f805cefc8ba4efe44d04eba587e9bd0f878f5dba1cc56e69352b66799aa: Status 404 returned error can't find the container with id da9e0f805cefc8ba4efe44d04eba587e9bd0f878f5dba1cc56e69352b66799aa Feb 24 00:17:36 crc kubenswrapper[4756]: I0224 00:17:36.911281 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 24 00:17:37 crc 
kubenswrapper[4756]: I0224 00:17:37.403042 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-669c8" event={"ID":"14aba63c-c480-47ac-ae8d-568569f4d558","Type":"ContainerStarted","Data":"7b66451ac2feba939177c077d03fa674f082215f6e4ea52ed803010927b7f0de"} Feb 24 00:17:37 crc kubenswrapper[4756]: I0224 00:17:37.403464 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-669c8" event={"ID":"14aba63c-c480-47ac-ae8d-568569f4d558","Type":"ContainerStarted","Data":"da9e0f805cefc8ba4efe44d04eba587e9bd0f878f5dba1cc56e69352b66799aa"} Feb 24 00:17:37 crc kubenswrapper[4756]: I0224 00:17:37.427407 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-669c8" podStartSLOduration=2.427385959 podStartE2EDuration="2.427385959s" podCreationTimestamp="2026-02-24 00:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-24 00:17:37.425468495 +0000 UTC m=+714.336331128" watchObservedRunningTime="2026-02-24 00:17:37.427385959 +0000 UTC m=+714.338248602" Feb 24 00:17:37 crc kubenswrapper[4756]: I0224 00:17:37.468859 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-gxk5z" Feb 24 00:17:42 crc kubenswrapper[4756]: I0224 00:17:42.949850 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Feb 24 00:17:42 crc kubenswrapper[4756]: I0224 00:17:42.951738 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:42 crc kubenswrapper[4756]: I0224 00:17:42.953779 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6fxlg" Feb 24 00:17:42 crc kubenswrapper[4756]: I0224 00:17:42.954636 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-ca" Feb 24 00:17:42 crc kubenswrapper[4756]: I0224 00:17:42.955049 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-sys-config" Feb 24 00:17:42 crc kubenswrapper[4756]: I0224 00:17:42.955163 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-3-global-ca" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.027639 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.136639 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.136684 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.136715 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.136806 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.137023 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.137185 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb2qq\" (UniqueName: \"kubernetes.io/projected/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-kube-api-access-rb2qq\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.137282 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildworkdir\") pod 
\"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.137339 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.137375 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.137409 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.137542 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.137577 4756 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239031 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239150 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239193 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239229 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " 
pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239251 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239275 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb2qq\" (UniqueName: \"kubernetes.io/projected/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-kube-api-access-rb2qq\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239315 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239349 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239373 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-node-pullsecrets\") pod 
\"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239397 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239431 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239453 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239648 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-blob-cache\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.239803 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-run\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.240286 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-system-configs\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.240285 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildcachedir\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.240420 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-root\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.240530 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-node-pullsecrets\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.240786 4756 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-proxy-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.240913 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildworkdir\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.241352 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-ca-bundles\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.249777 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.260516 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 
00:17:43.270894 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb2qq\" (UniqueName: \"kubernetes.io/projected/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-kube-api-access-rb2qq\") pod \"service-telemetry-operator-3-build\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.283574 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:43 crc kubenswrapper[4756]: I0224 00:17:43.591477 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Feb 24 00:17:44 crc kubenswrapper[4756]: I0224 00:17:44.462258 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"a4be7597-99ef-4bce-94b4-c35b1f2cbb38","Type":"ContainerStarted","Data":"01de38cbba797cdd307e485d067c36a772e4c9df418cf1e256e55e6afff43ef7"} Feb 24 00:17:45 crc kubenswrapper[4756]: I0224 00:17:45.471568 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"a4be7597-99ef-4bce-94b4-c35b1f2cbb38","Type":"ContainerStarted","Data":"9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397"} Feb 24 00:17:46 crc kubenswrapper[4756]: E0224 00:17:46.544190 4756 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=3808825962168627305, SKID=, AKID=92:1A:89:11:71:52:23:1E:EE:1B:2B:03:E0:6A:89:C1:91:7A:4B:10 failed: x509: certificate signed by unknown authority" Feb 24 00:17:47 crc kubenswrapper[4756]: I0224 00:17:47.579172 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Feb 24 00:17:47 crc kubenswrapper[4756]: I0224 00:17:47.579547 4756 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-3-build" podUID="a4be7597-99ef-4bce-94b4-c35b1f2cbb38" containerName="git-clone" containerID="cri-o://9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397" gracePeriod=30 Feb 24 00:17:47 crc kubenswrapper[4756]: I0224 00:17:47.938868 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_a4be7597-99ef-4bce-94b4-c35b1f2cbb38/git-clone/0.log" Feb 24 00:17:47 crc kubenswrapper[4756]: I0224 00:17:47.939315 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.113722 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-node-pullsecrets\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.113827 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-blob-cache\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.113891 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb2qq\" (UniqueName: \"kubernetes.io/projected/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-kube-api-access-rb2qq\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.113903 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.113943 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-pull\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114100 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-system-configs\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114166 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildcachedir\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114217 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-root\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114239 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-ca-bundles\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114295 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-push\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114328 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114354 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildworkdir\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114399 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114492 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-proxy-ca-bundles\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114525 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-run\") pod \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\" (UID: \"a4be7597-99ef-4bce-94b4-c35b1f2cbb38\") " Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114789 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.114903 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.115004 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.115191 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.115326 4756 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.115341 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.115353 4756 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.115368 4756 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.115387 4756 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.115399 4756 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.115411 4756 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.115798 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.120199 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-push" (OuterVolumeSpecName: "builder-dockercfg-6fxlg-push") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "builder-dockercfg-6fxlg-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.120266 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-kube-api-access-rb2qq" (OuterVolumeSpecName: "kube-api-access-rb2qq") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "kube-api-access-rb2qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.123289 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-pull" (OuterVolumeSpecName: "builder-dockercfg-6fxlg-pull") pod "a4be7597-99ef-4bce-94b4-c35b1f2cbb38" (UID: "a4be7597-99ef-4bce-94b4-c35b1f2cbb38"). InnerVolumeSpecName "builder-dockercfg-6fxlg-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.217194 4756 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.217225 4756 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.217239 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb2qq\" (UniqueName: \"kubernetes.io/projected/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-kube-api-access-rb2qq\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.217249 4756 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.217260 4756 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.217273 4756 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/a4be7597-99ef-4bce-94b4-c35b1f2cbb38-builder-dockercfg-6fxlg-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.497844 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-3-build_a4be7597-99ef-4bce-94b4-c35b1f2cbb38/git-clone/0.log" Feb 24 00:17:48 crc 
kubenswrapper[4756]: I0224 00:17:48.497896 4756 generic.go:334] "Generic (PLEG): container finished" podID="a4be7597-99ef-4bce-94b4-c35b1f2cbb38" containerID="9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397" exitCode=1 Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.497936 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"a4be7597-99ef-4bce-94b4-c35b1f2cbb38","Type":"ContainerDied","Data":"9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397"} Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.497971 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-3-build" event={"ID":"a4be7597-99ef-4bce-94b4-c35b1f2cbb38","Type":"ContainerDied","Data":"01de38cbba797cdd307e485d067c36a772e4c9df418cf1e256e55e6afff43ef7"} Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.497994 4756 scope.go:117] "RemoveContainer" containerID="9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.498195 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-3-build" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.525896 4756 scope.go:117] "RemoveContainer" containerID="9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397" Feb 24 00:17:48 crc kubenswrapper[4756]: E0224 00:17:48.526593 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397\": container with ID starting with 9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397 not found: ID does not exist" containerID="9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.526677 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397"} err="failed to get container status \"9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397\": rpc error: code = NotFound desc = could not find container \"9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397\": container with ID starting with 9cbd209101ee84efce76f5a6fa84b8e1633a21bea47c3a1187954dbf801d4397 not found: ID does not exist" Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.548129 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Feb 24 00:17:48 crc kubenswrapper[4756]: I0224 00:17:48.554757 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-3-build"] Feb 24 00:17:49 crc kubenswrapper[4756]: I0224 00:17:49.841548 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4be7597-99ef-4bce-94b4-c35b1f2cbb38" path="/var/lib/kubelet/pods/a4be7597-99ef-4bce-94b4-c35b1f2cbb38/volumes" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.133422 4756 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Feb 24 00:17:59 crc kubenswrapper[4756]: E0224 00:17:59.134486 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4be7597-99ef-4bce-94b4-c35b1f2cbb38" containerName="git-clone" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.134504 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4be7597-99ef-4bce-94b4-c35b1f2cbb38" containerName="git-clone" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.134653 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4be7597-99ef-4bce-94b4-c35b1f2cbb38" containerName="git-clone" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.136552 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.138576 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-sys-config" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.138768 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-ca" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.140407 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-4-global-ca" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.141245 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6fxlg" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.163009 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.305548 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.305864 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.306051 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.306208 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.306290 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " 
pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.306357 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.306408 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.306446 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.306468 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.306577 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: 
\"kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.306618 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pzgg\" (UniqueName: \"kubernetes.io/projected/21d73dac-d1a5-4075-b615-a84d6089097d-kube-api-access-5pzgg\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.306669 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407460 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407518 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: 
I0224 00:17:59.407547 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407564 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pzgg\" (UniqueName: \"kubernetes.io/projected/21d73dac-d1a5-4075-b615-a84d6089097d-kube-api-access-5pzgg\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407588 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407620 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407638 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-4-build\" (UID: 
\"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407658 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407674 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407694 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407716 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407741 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407851 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-node-pullsecrets\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.407988 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-buildworkdir\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.408329 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-buildcachedir\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.408496 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-root\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.408598 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-proxy-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.408832 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-run\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.408891 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-build-blob-cache\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.409143 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-system-configs\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.409457 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-ca-bundles\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.413645 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.414264 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.432621 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pzgg\" (UniqueName: \"kubernetes.io/projected/21d73dac-d1a5-4075-b615-a84d6089097d-kube-api-access-5pzgg\") pod \"service-telemetry-operator-4-build\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.508327 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:17:59 crc kubenswrapper[4756]: I0224 00:17:59.987565 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Feb 24 00:18:00 crc kubenswrapper[4756]: I0224 00:18:00.592786 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"21d73dac-d1a5-4075-b615-a84d6089097d","Type":"ContainerStarted","Data":"07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917"} Feb 24 00:18:00 crc kubenswrapper[4756]: I0224 00:18:00.592874 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"21d73dac-d1a5-4075-b615-a84d6089097d","Type":"ContainerStarted","Data":"a4d8d28cfa90eeadfdb4a530105dceb6e89b69a2743aa49b3397c214b5b09a10"} Feb 24 00:18:00 crc kubenswrapper[4756]: E0224 00:18:00.648167 4756 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=3808825962168627305, SKID=, AKID=92:1A:89:11:71:52:23:1E:EE:1B:2B:03:E0:6A:89:C1:91:7A:4B:10 failed: x509: certificate signed by unknown authority" Feb 24 00:18:01 crc kubenswrapper[4756]: I0224 00:18:01.682574 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Feb 24 00:18:02 crc kubenswrapper[4756]: I0224 00:18:02.611248 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-4-build" podUID="21d73dac-d1a5-4075-b615-a84d6089097d" containerName="git-clone" containerID="cri-o://07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917" gracePeriod=30 Feb 24 00:18:02 crc kubenswrapper[4756]: I0224 00:18:02.987735 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_21d73dac-d1a5-4075-b615-a84d6089097d/git-clone/0.log" Feb 24 00:18:02 crc kubenswrapper[4756]: I0224 00:18:02.988181 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.071809 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-system-configs\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.071880 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-pull\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.071911 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-buildworkdir\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.071941 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-run\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.071978 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pzgg\" (UniqueName: 
\"kubernetes.io/projected/21d73dac-d1a5-4075-b615-a84d6089097d-kube-api-access-5pzgg\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.072021 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-root\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.072042 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-node-pullsecrets\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.072136 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-push\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.072182 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-proxy-ca-bundles\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.072202 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-build-blob-cache\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: 
\"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.072228 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-ca-bundles\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.072251 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-buildcachedir\") pod \"21d73dac-d1a5-4075-b615-a84d6089097d\" (UID: \"21d73dac-d1a5-4075-b615-a84d6089097d\") " Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.072478 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.072497 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.072785 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). 
InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.073183 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.073165 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.073224 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.073291 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.073439 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.073565 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.077840 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-push" (OuterVolumeSpecName: "builder-dockercfg-6fxlg-push") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). InnerVolumeSpecName "builder-dockercfg-6fxlg-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.079278 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d73dac-d1a5-4075-b615-a84d6089097d-kube-api-access-5pzgg" (OuterVolumeSpecName: "kube-api-access-5pzgg") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). InnerVolumeSpecName "kube-api-access-5pzgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.079455 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-pull" (OuterVolumeSpecName: "builder-dockercfg-6fxlg-pull") pod "21d73dac-d1a5-4075-b615-a84d6089097d" (UID: "21d73dac-d1a5-4075-b615-a84d6089097d"). InnerVolumeSpecName "builder-dockercfg-6fxlg-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173718 4756 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173762 4756 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173775 4756 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173783 4756 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173797 4756 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173806 4756 
reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/21d73dac-d1a5-4075-b615-a84d6089097d-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173814 4756 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/21d73dac-d1a5-4075-b615-a84d6089097d-builder-dockercfg-6fxlg-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173826 4756 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173835 4756 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173843 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pzgg\" (UniqueName: \"kubernetes.io/projected/21d73dac-d1a5-4075-b615-a84d6089097d-kube-api-access-5pzgg\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173851 4756 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/21d73dac-d1a5-4075-b615-a84d6089097d-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.173859 4756 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/21d73dac-d1a5-4075-b615-a84d6089097d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.625923 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-4-build_21d73dac-d1a5-4075-b615-a84d6089097d/git-clone/0.log" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.626002 4756 generic.go:334] "Generic (PLEG): container finished" podID="21d73dac-d1a5-4075-b615-a84d6089097d" containerID="07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917" exitCode=1 Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.626081 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"21d73dac-d1a5-4075-b615-a84d6089097d","Type":"ContainerDied","Data":"07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917"} Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.626126 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-4-build" event={"ID":"21d73dac-d1a5-4075-b615-a84d6089097d","Type":"ContainerDied","Data":"a4d8d28cfa90eeadfdb4a530105dceb6e89b69a2743aa49b3397c214b5b09a10"} Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.626151 4756 scope.go:117] "RemoveContainer" containerID="07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.626508 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-4-build" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.653090 4756 scope.go:117] "RemoveContainer" containerID="07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917" Feb 24 00:18:03 crc kubenswrapper[4756]: E0224 00:18:03.654127 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917\": container with ID starting with 07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917 not found: ID does not exist" containerID="07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.654190 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917"} err="failed to get container status \"07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917\": rpc error: code = NotFound desc = could not find container \"07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917\": container with ID starting with 07341c884feebbda977d17851c67a9059fb3d587833e6f9e7a816d5239dfa917 not found: ID does not exist" Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.676933 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.681639 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-4-build"] Feb 24 00:18:03 crc kubenswrapper[4756]: I0224 00:18:03.845556 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d73dac-d1a5-4075-b615-a84d6089097d" path="/var/lib/kubelet/pods/21d73dac-d1a5-4075-b615-a84d6089097d/volumes" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.102244 4756 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Feb 24 00:18:13 crc kubenswrapper[4756]: E0224 00:18:13.103320 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d73dac-d1a5-4075-b615-a84d6089097d" containerName="git-clone" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.103338 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d73dac-d1a5-4075-b615-a84d6089097d" containerName="git-clone" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.103496 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d73dac-d1a5-4075-b615-a84d6089097d" containerName="git-clone" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.105157 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.107330 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-global-ca" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.107404 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-sys-config" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.108613 4756 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-6fxlg" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.108898 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-5-ca" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.123828 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.133709 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.133786 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.133826 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.133878 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28sx8\" (UniqueName: \"kubernetes.io/projected/c5f9c636-15f9-4d68-b786-3921c81b1518-kube-api-access-28sx8\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.133914 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " 
pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.133959 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.134004 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.134035 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.134089 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.134133 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.134168 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.134199 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.235327 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.235448 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 
00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.235538 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.235575 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.235654 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.235720 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-buildcachedir\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.235752 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: 
\"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.235820 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-node-pullsecrets\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.235982 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.236025 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.236130 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.236162 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.236179 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.236272 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28sx8\" (UniqueName: \"kubernetes.io/projected/c5f9c636-15f9-4d68-b786-3921c81b1518-kube-api-access-28sx8\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.236293 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-system-configs\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.236545 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-root\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.236617 4756 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-proxy-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.237030 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-buildworkdir\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.237283 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-build-blob-cache\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.237396 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-run\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.237816 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-ca-bundles\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc 
kubenswrapper[4756]: I0224 00:18:13.244616 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-push\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.253032 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-pull\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.253268 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28sx8\" (UniqueName: \"kubernetes.io/projected/c5f9c636-15f9-4d68-b786-3921c81b1518-kube-api-access-28sx8\") pod \"service-telemetry-operator-5-build\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.431635 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:13 crc kubenswrapper[4756]: I0224 00:18:13.867570 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Feb 24 00:18:14 crc kubenswrapper[4756]: I0224 00:18:14.712773 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"c5f9c636-15f9-4d68-b786-3921c81b1518","Type":"ContainerStarted","Data":"5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0"} Feb 24 00:18:14 crc kubenswrapper[4756]: I0224 00:18:14.713225 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"c5f9c636-15f9-4d68-b786-3921c81b1518","Type":"ContainerStarted","Data":"3afa8c1081129c345e1a82e2a14d1a491f0f781cf31d967ca7174a5b59210d5d"} Feb 24 00:18:14 crc kubenswrapper[4756]: E0224 00:18:14.783826 4756 server.go:309] "Unable to authenticate the request due to an error" err="verifying certificate SN=3808825962168627305, SKID=, AKID=92:1A:89:11:71:52:23:1E:EE:1B:2B:03:E0:6A:89:C1:91:7A:4B:10 failed: x509: certificate signed by unknown authority" Feb 24 00:18:15 crc kubenswrapper[4756]: I0224 00:18:15.162635 4756 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 24 00:18:15 crc kubenswrapper[4756]: I0224 00:18:15.811639 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Feb 24 00:18:16 crc kubenswrapper[4756]: I0224 00:18:16.726394 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-5-build" podUID="c5f9c636-15f9-4d68-b786-3921c81b1518" containerName="git-clone" containerID="cri-o://5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0" gracePeriod=30 Feb 24 00:18:17 crc 
kubenswrapper[4756]: I0224 00:18:17.053973 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_c5f9c636-15f9-4d68-b786-3921c81b1518/git-clone/0.log" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.054111 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.196183 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-system-configs\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.196726 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-proxy-ca-bundles\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.196771 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-build-blob-cache\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.196818 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-root\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.196851 4756 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-28sx8\" (UniqueName: \"kubernetes.io/projected/c5f9c636-15f9-4d68-b786-3921c81b1518-kube-api-access-28sx8\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.196888 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-run\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.196945 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-ca-bundles\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197018 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-pull\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197049 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-buildworkdir\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197101 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: 
\"kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-push\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197147 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-buildcachedir\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197173 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-node-pullsecrets\") pod \"c5f9c636-15f9-4d68-b786-3921c81b1518\" (UID: \"c5f9c636-15f9-4d68-b786-3921c81b1518\") " Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197251 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197526 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197550 4756 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-root\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197617 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197663 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197771 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.197987 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.198160 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.198249 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.198790 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.202750 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-push" (OuterVolumeSpecName: "builder-dockercfg-6fxlg-push") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "builder-dockercfg-6fxlg-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.203008 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f9c636-15f9-4d68-b786-3921c81b1518-kube-api-access-28sx8" (OuterVolumeSpecName: "kube-api-access-28sx8") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "kube-api-access-28sx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.203181 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-pull" (OuterVolumeSpecName: "builder-dockercfg-6fxlg-pull") pod "c5f9c636-15f9-4d68-b786-3921c81b1518" (UID: "c5f9c636-15f9-4d68-b786-3921c81b1518"). InnerVolumeSpecName "builder-dockercfg-6fxlg-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.298359 4756 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-6fxlg-pull\" (UniqueName: \"kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-pull\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.298420 4756 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-buildworkdir\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.298437 4756 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-6fxlg-push\" (UniqueName: \"kubernetes.io/secret/c5f9c636-15f9-4d68-b786-3921c81b1518-builder-dockercfg-6fxlg-push\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.298450 4756 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-buildcachedir\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.298463 4756 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c5f9c636-15f9-4d68-b786-3921c81b1518-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.298474 4756 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-system-configs\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.298486 4756 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-proxy-ca-bundles\") on node \"crc\" DevicePath 
\"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.298497 4756 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-build-blob-cache\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.298510 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28sx8\" (UniqueName: \"kubernetes.io/projected/c5f9c636-15f9-4d68-b786-3921c81b1518-kube-api-access-28sx8\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.298520 4756 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c5f9c636-15f9-4d68-b786-3921c81b1518-container-storage-run\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.298531 4756 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c5f9c636-15f9-4d68-b786-3921c81b1518-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.736653 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-5-build_c5f9c636-15f9-4d68-b786-3921c81b1518/git-clone/0.log" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.737027 4756 generic.go:334] "Generic (PLEG): container finished" podID="c5f9c636-15f9-4d68-b786-3921c81b1518" containerID="5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0" exitCode=1 Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.737088 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"c5f9c636-15f9-4d68-b786-3921c81b1518","Type":"ContainerDied","Data":"5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0"} Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.737130 4756 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-5-build" event={"ID":"c5f9c636-15f9-4d68-b786-3921c81b1518","Type":"ContainerDied","Data":"3afa8c1081129c345e1a82e2a14d1a491f0f781cf31d967ca7174a5b59210d5d"} Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.737149 4756 scope.go:117] "RemoveContainer" containerID="5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.737204 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-5-build" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.764602 4756 scope.go:117] "RemoveContainer" containerID="5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0" Feb 24 00:18:17 crc kubenswrapper[4756]: E0224 00:18:17.765319 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0\": container with ID starting with 5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0 not found: ID does not exist" containerID="5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.765371 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0"} err="failed to get container status \"5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0\": rpc error: code = NotFound desc = could not find container \"5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0\": container with ID starting with 5322907b5feb43fdc98a8a9ebaa4ac30231c400ba943b0e15c3e9b9902a34ba0 not found: ID does not exist" Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.778793 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["service-telemetry/service-telemetry-operator-5-build"] Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.783535 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-5-build"] Feb 24 00:18:17 crc kubenswrapper[4756]: I0224 00:18:17.848216 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f9c636-15f9-4d68-b786-3921c81b1518" path="/var/lib/kubelet/pods/c5f9c636-15f9-4d68-b786-3921c81b1518/volumes" Feb 24 00:18:52 crc kubenswrapper[4756]: I0224 00:18:52.711468 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:18:52 crc kubenswrapper[4756]: I0224 00:18:52.712303 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.544055 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tmxcd/must-gather-zk8st"] Feb 24 00:18:53 crc kubenswrapper[4756]: E0224 00:18:53.544460 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f9c636-15f9-4d68-b786-3921c81b1518" containerName="git-clone" Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.544486 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f9c636-15f9-4d68-b786-3921c81b1518" containerName="git-clone" Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.544638 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f9c636-15f9-4d68-b786-3921c81b1518" containerName="git-clone" Feb 24 
00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.545583 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tmxcd/must-gather-zk8st" Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.551863 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tmxcd"/"openshift-service-ca.crt" Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.551987 4756 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tmxcd"/"kube-root-ca.crt" Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.566249 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tmxcd/must-gather-zk8st"] Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.727995 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-must-gather-output\") pod \"must-gather-zk8st\" (UID: \"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6\") " pod="openshift-must-gather-tmxcd/must-gather-zk8st" Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.728112 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph7lk\" (UniqueName: \"kubernetes.io/projected/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-kube-api-access-ph7lk\") pod \"must-gather-zk8st\" (UID: \"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6\") " pod="openshift-must-gather-tmxcd/must-gather-zk8st" Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.829877 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph7lk\" (UniqueName: \"kubernetes.io/projected/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-kube-api-access-ph7lk\") pod \"must-gather-zk8st\" (UID: \"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6\") " pod="openshift-must-gather-tmxcd/must-gather-zk8st" Feb 24 00:18:53 crc kubenswrapper[4756]: 
I0224 00:18:53.830025 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-must-gather-output\") pod \"must-gather-zk8st\" (UID: \"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6\") " pod="openshift-must-gather-tmxcd/must-gather-zk8st" Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.830491 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-must-gather-output\") pod \"must-gather-zk8st\" (UID: \"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6\") " pod="openshift-must-gather-tmxcd/must-gather-zk8st" Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.858188 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph7lk\" (UniqueName: \"kubernetes.io/projected/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-kube-api-access-ph7lk\") pod \"must-gather-zk8st\" (UID: \"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6\") " pod="openshift-must-gather-tmxcd/must-gather-zk8st" Feb 24 00:18:53 crc kubenswrapper[4756]: I0224 00:18:53.863606 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tmxcd/must-gather-zk8st" Feb 24 00:18:54 crc kubenswrapper[4756]: I0224 00:18:54.315311 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tmxcd/must-gather-zk8st"] Feb 24 00:18:55 crc kubenswrapper[4756]: I0224 00:18:55.056259 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tmxcd/must-gather-zk8st" event={"ID":"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6","Type":"ContainerStarted","Data":"84ca2a2200098dbcfd7a7502bfb5e127fa266acd4934c5f9017350ef53acf076"} Feb 24 00:19:03 crc kubenswrapper[4756]: I0224 00:19:03.131898 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tmxcd/must-gather-zk8st" event={"ID":"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6","Type":"ContainerStarted","Data":"3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8"} Feb 24 00:19:03 crc kubenswrapper[4756]: I0224 00:19:03.132457 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tmxcd/must-gather-zk8st" event={"ID":"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6","Type":"ContainerStarted","Data":"26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850"} Feb 24 00:19:03 crc kubenswrapper[4756]: I0224 00:19:03.150560 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tmxcd/must-gather-zk8st" podStartSLOduration=2.511936303 podStartE2EDuration="10.150535708s" podCreationTimestamp="2026-02-24 00:18:53 +0000 UTC" firstStartedPulling="2026-02-24 00:18:54.319562941 +0000 UTC m=+791.230425594" lastFinishedPulling="2026-02-24 00:19:01.958162366 +0000 UTC m=+798.869024999" observedRunningTime="2026-02-24 00:19:03.147719676 +0000 UTC m=+800.058582319" watchObservedRunningTime="2026-02-24 00:19:03.150535708 +0000 UTC m=+800.061398361" Feb 24 00:19:22 crc kubenswrapper[4756]: I0224 00:19:22.711113 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:19:22 crc kubenswrapper[4756]: I0224 00:19:22.711754 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:19:48 crc kubenswrapper[4756]: I0224 00:19:48.972960 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-ppwx6_7f127f66-2445-4615-8949-1a2e70a902c0/control-plane-machine-set-operator/0.log" Feb 24 00:19:49 crc kubenswrapper[4756]: I0224 00:19:49.147749 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dm5nw_c7d56473-d779-43ac-a1a0-2924bab188f5/kube-rbac-proxy/0.log" Feb 24 00:19:49 crc kubenswrapper[4756]: I0224 00:19:49.186967 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dm5nw_c7d56473-d779-43ac-a1a0-2924bab188f5/machine-api-operator/0.log" Feb 24 00:19:52 crc kubenswrapper[4756]: I0224 00:19:52.710938 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:19:52 crc kubenswrapper[4756]: I0224 00:19:52.711348 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 24 00:19:52 crc kubenswrapper[4756]: I0224 00:19:52.711437 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qb88h"
Feb 24 00:19:52 crc kubenswrapper[4756]: I0224 00:19:52.712146 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d01c4008be0b2356d9dc9b4c088f54b7dea3397664e049fd73bc626db7cec60"} pod="openshift-machine-config-operator/machine-config-daemon-qb88h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 24 00:19:52 crc kubenswrapper[4756]: I0224 00:19:52.712201 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" containerID="cri-o://1d01c4008be0b2356d9dc9b4c088f54b7dea3397664e049fd73bc626db7cec60" gracePeriod=600
Feb 24 00:19:53 crc kubenswrapper[4756]: I0224 00:19:53.493325 4756 generic.go:334] "Generic (PLEG): container finished" podID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerID="1d01c4008be0b2356d9dc9b4c088f54b7dea3397664e049fd73bc626db7cec60" exitCode=0
Feb 24 00:19:53 crc kubenswrapper[4756]: I0224 00:19:53.493402 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerDied","Data":"1d01c4008be0b2356d9dc9b4c088f54b7dea3397664e049fd73bc626db7cec60"}
Feb 24 00:19:53 crc kubenswrapper[4756]: I0224 00:19:53.494046 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerStarted","Data":"fc2ac906d4539078c314833b1bd446386b29d29b18184d97ac48c21e47011591"}
Feb 24 00:19:53 crc kubenswrapper[4756]: I0224 00:19:53.494093 4756 scope.go:117] "RemoveContainer" containerID="a250b84fb5cf3d50f1c8f73096c01a386ce6306ff496adf3c239fdd204f80ec3"
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.127444 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4nzbp"]
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.129768 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.147424 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-catalog-content\") pod \"community-operators-4nzbp\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") " pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.147536 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-utilities\") pod \"community-operators-4nzbp\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") " pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.147583 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft58p\" (UniqueName: \"kubernetes.io/projected/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-kube-api-access-ft58p\") pod \"community-operators-4nzbp\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") " pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.218112 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nzbp"]
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.248988 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-catalog-content\") pod \"community-operators-4nzbp\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") " pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.249115 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-utilities\") pod \"community-operators-4nzbp\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") " pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.249155 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft58p\" (UniqueName: \"kubernetes.io/projected/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-kube-api-access-ft58p\") pod \"community-operators-4nzbp\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") " pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.250741 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-catalog-content\") pod \"community-operators-4nzbp\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") " pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.251838 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-utilities\") pod \"community-operators-4nzbp\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") " pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.298568 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft58p\" (UniqueName: \"kubernetes.io/projected/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-kube-api-access-ft58p\") pod \"community-operators-4nzbp\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") " pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:01 crc kubenswrapper[4756]: I0224 00:20:01.528256 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:02 crc kubenswrapper[4756]: I0224 00:20:02.062647 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nzbp"]
Feb 24 00:20:02 crc kubenswrapper[4756]: I0224 00:20:02.495178 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-669c8_14aba63c-c480-47ac-ae8d-568569f4d558/cert-manager-controller/0.log"
Feb 24 00:20:02 crc kubenswrapper[4756]: I0224 00:20:02.555872 4756 generic.go:334] "Generic (PLEG): container finished" podID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" containerID="9ff537f979cf099991ce937a45d9a629d81fe710e1ef21e7fe700a502ead3b07" exitCode=0
Feb 24 00:20:02 crc kubenswrapper[4756]: I0224 00:20:02.555920 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nzbp" event={"ID":"a2262628-7cb6-4042-a0ae-37d0a4ff79b8","Type":"ContainerDied","Data":"9ff537f979cf099991ce937a45d9a629d81fe710e1ef21e7fe700a502ead3b07"}
Feb 24 00:20:02 crc kubenswrapper[4756]: I0224 00:20:02.555966 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nzbp" event={"ID":"a2262628-7cb6-4042-a0ae-37d0a4ff79b8","Type":"ContainerStarted","Data":"37cfa503bd9cb3b9a7df374f12e8c7f008c9a6b12ed474b700a390f36314ff40"}
Feb 24 00:20:02 crc kubenswrapper[4756]: I0224 00:20:02.632221 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-n9tl7_f248b635-79e8-4b65-9006-0c4b074800f5/cert-manager-cainjector/0.log"
Feb 24 00:20:02 crc kubenswrapper[4756]: I0224 00:20:02.708368 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-gxk5z_87943173-5ada-4bb0-8baf-d1b8fdf8ea9a/cert-manager-webhook/0.log"
Feb 24 00:20:03 crc kubenswrapper[4756]: I0224 00:20:03.564545 4756 generic.go:334] "Generic (PLEG): container finished" podID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" containerID="d88d6289d0ed953269b0fbdd8c3c9b4aa7c0d7b1dc37bc4d8a5e2fb237583168" exitCode=0
Feb 24 00:20:03 crc kubenswrapper[4756]: I0224 00:20:03.564662 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nzbp" event={"ID":"a2262628-7cb6-4042-a0ae-37d0a4ff79b8","Type":"ContainerDied","Data":"d88d6289d0ed953269b0fbdd8c3c9b4aa7c0d7b1dc37bc4d8a5e2fb237583168"}
Feb 24 00:20:04 crc kubenswrapper[4756]: I0224 00:20:04.575485 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nzbp" event={"ID":"a2262628-7cb6-4042-a0ae-37d0a4ff79b8","Type":"ContainerStarted","Data":"3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42"}
Feb 24 00:20:04 crc kubenswrapper[4756]: I0224 00:20:04.597582 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4nzbp" podStartSLOduration=2.174508118 podStartE2EDuration="3.597559199s" podCreationTimestamp="2026-02-24 00:20:01 +0000 UTC" firstStartedPulling="2026-02-24 00:20:02.557933376 +0000 UTC m=+859.468796009" lastFinishedPulling="2026-02-24 00:20:03.980984457 +0000 UTC m=+860.891847090" observedRunningTime="2026-02-24 00:20:04.596218502 +0000 UTC m=+861.507081155" watchObservedRunningTime="2026-02-24 00:20:04.597559199 +0000 UTC m=+861.508421832"
Feb 24 00:20:11 crc kubenswrapper[4756]: I0224 00:20:11.528941 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:11 crc kubenswrapper[4756]: I0224 00:20:11.529320 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:11 crc kubenswrapper[4756]: I0224 00:20:11.568440 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:11 crc kubenswrapper[4756]: I0224 00:20:11.656685 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:11 crc kubenswrapper[4756]: I0224 00:20:11.801763 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nzbp"]
Feb 24 00:20:13 crc kubenswrapper[4756]: I0224 00:20:13.631636 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4nzbp" podUID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" containerName="registry-server" containerID="cri-o://3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42" gracePeriod=2
Feb 24 00:20:13 crc kubenswrapper[4756]: I0224 00:20:13.993408 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.158342 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-catalog-content\") pod \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") "
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.158416 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-utilities\") pod \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") "
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.158567 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft58p\" (UniqueName: \"kubernetes.io/projected/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-kube-api-access-ft58p\") pod \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\" (UID: \"a2262628-7cb6-4042-a0ae-37d0a4ff79b8\") "
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.159505 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-utilities" (OuterVolumeSpecName: "utilities") pod "a2262628-7cb6-4042-a0ae-37d0a4ff79b8" (UID: "a2262628-7cb6-4042-a0ae-37d0a4ff79b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.159686 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.168335 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-kube-api-access-ft58p" (OuterVolumeSpecName: "kube-api-access-ft58p") pod "a2262628-7cb6-4042-a0ae-37d0a4ff79b8" (UID: "a2262628-7cb6-4042-a0ae-37d0a4ff79b8"). InnerVolumeSpecName "kube-api-access-ft58p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.214150 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qjbb7"]
Feb 24 00:20:14 crc kubenswrapper[4756]: E0224 00:20:14.214520 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" containerName="extract-content"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.214537 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" containerName="extract-content"
Feb 24 00:20:14 crc kubenswrapper[4756]: E0224 00:20:14.214564 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" containerName="extract-utilities"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.214572 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" containerName="extract-utilities"
Feb 24 00:20:14 crc kubenswrapper[4756]: E0224 00:20:14.214580 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" containerName="registry-server"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.214589 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" containerName="registry-server"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.214749 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" containerName="registry-server"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.215800 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.216420 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2262628-7cb6-4042-a0ae-37d0a4ff79b8" (UID: "a2262628-7cb6-4042-a0ae-37d0a4ff79b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.227543 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjbb7"]
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.262053 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft58p\" (UniqueName: \"kubernetes.io/projected/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-kube-api-access-ft58p\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.262123 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2262628-7cb6-4042-a0ae-37d0a4ff79b8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.363525 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-catalog-content\") pod \"redhat-operators-qjbb7\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") " pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.363885 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlfqx\" (UniqueName: \"kubernetes.io/projected/89804092-1928-4dd8-bc4e-8f78023b604e-kube-api-access-dlfqx\") pod \"redhat-operators-qjbb7\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") " pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.364031 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-utilities\") pod \"redhat-operators-qjbb7\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") " pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.465295 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-utilities\") pod \"redhat-operators-qjbb7\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") " pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.465654 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-catalog-content\") pod \"redhat-operators-qjbb7\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") " pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.465760 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlfqx\" (UniqueName: \"kubernetes.io/projected/89804092-1928-4dd8-bc4e-8f78023b604e-kube-api-access-dlfqx\") pod \"redhat-operators-qjbb7\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") " pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.465909 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-utilities\") pod \"redhat-operators-qjbb7\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") " pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.466174 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-catalog-content\") pod \"redhat-operators-qjbb7\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") " pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.484367 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlfqx\" (UniqueName: \"kubernetes.io/projected/89804092-1928-4dd8-bc4e-8f78023b604e-kube-api-access-dlfqx\") pod \"redhat-operators-qjbb7\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") " pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.538970 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.646108 4756 generic.go:334] "Generic (PLEG): container finished" podID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" containerID="3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42" exitCode=0
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.646418 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nzbp" event={"ID":"a2262628-7cb6-4042-a0ae-37d0a4ff79b8","Type":"ContainerDied","Data":"3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42"}
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.646456 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nzbp" event={"ID":"a2262628-7cb6-4042-a0ae-37d0a4ff79b8","Type":"ContainerDied","Data":"37cfa503bd9cb3b9a7df374f12e8c7f008c9a6b12ed474b700a390f36314ff40"}
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.646478 4756 scope.go:117] "RemoveContainer" containerID="3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.646476 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nzbp"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.674227 4756 scope.go:117] "RemoveContainer" containerID="d88d6289d0ed953269b0fbdd8c3c9b4aa7c0d7b1dc37bc4d8a5e2fb237583168"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.697851 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nzbp"]
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.711054 4756 scope.go:117] "RemoveContainer" containerID="9ff537f979cf099991ce937a45d9a629d81fe710e1ef21e7fe700a502ead3b07"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.716914 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4nzbp"]
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.732473 4756 scope.go:117] "RemoveContainer" containerID="3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42"
Feb 24 00:20:14 crc kubenswrapper[4756]: E0224 00:20:14.733262 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42\": container with ID starting with 3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42 not found: ID does not exist" containerID="3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.733290 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42"} err="failed to get container status \"3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42\": rpc error: code = NotFound desc = could not find container \"3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42\": container with ID starting with 3d704c43b9a2985118d7beabf9045f6350d824a4f298880f1a9037e08dc23b42 not found: ID does not exist"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.733317 4756 scope.go:117] "RemoveContainer" containerID="d88d6289d0ed953269b0fbdd8c3c9b4aa7c0d7b1dc37bc4d8a5e2fb237583168"
Feb 24 00:20:14 crc kubenswrapper[4756]: E0224 00:20:14.733617 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d88d6289d0ed953269b0fbdd8c3c9b4aa7c0d7b1dc37bc4d8a5e2fb237583168\": container with ID starting with d88d6289d0ed953269b0fbdd8c3c9b4aa7c0d7b1dc37bc4d8a5e2fb237583168 not found: ID does not exist" containerID="d88d6289d0ed953269b0fbdd8c3c9b4aa7c0d7b1dc37bc4d8a5e2fb237583168"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.733650 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d88d6289d0ed953269b0fbdd8c3c9b4aa7c0d7b1dc37bc4d8a5e2fb237583168"} err="failed to get container status \"d88d6289d0ed953269b0fbdd8c3c9b4aa7c0d7b1dc37bc4d8a5e2fb237583168\": rpc error: code = NotFound desc = could not find container \"d88d6289d0ed953269b0fbdd8c3c9b4aa7c0d7b1dc37bc4d8a5e2fb237583168\": container with ID starting with d88d6289d0ed953269b0fbdd8c3c9b4aa7c0d7b1dc37bc4d8a5e2fb237583168 not found: ID does not exist"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.733666 4756 scope.go:117] "RemoveContainer" containerID="9ff537f979cf099991ce937a45d9a629d81fe710e1ef21e7fe700a502ead3b07"
Feb 24 00:20:14 crc kubenswrapper[4756]: E0224 00:20:14.734195 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ff537f979cf099991ce937a45d9a629d81fe710e1ef21e7fe700a502ead3b07\": container with ID starting with 9ff537f979cf099991ce937a45d9a629d81fe710e1ef21e7fe700a502ead3b07 not found: ID does not exist" containerID="9ff537f979cf099991ce937a45d9a629d81fe710e1ef21e7fe700a502ead3b07"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.734227 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff537f979cf099991ce937a45d9a629d81fe710e1ef21e7fe700a502ead3b07"} err="failed to get container status \"9ff537f979cf099991ce937a45d9a629d81fe710e1ef21e7fe700a502ead3b07\": rpc error: code = NotFound desc = could not find container \"9ff537f979cf099991ce937a45d9a629d81fe710e1ef21e7fe700a502ead3b07\": container with ID starting with 9ff537f979cf099991ce937a45d9a629d81fe710e1ef21e7fe700a502ead3b07 not found: ID does not exist"
Feb 24 00:20:14 crc kubenswrapper[4756]: I0224 00:20:14.808464 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qjbb7"]
Feb 24 00:20:15 crc kubenswrapper[4756]: I0224 00:20:15.658147 4756 generic.go:334] "Generic (PLEG): container finished" podID="89804092-1928-4dd8-bc4e-8f78023b604e" containerID="bdd2a0876640701c9facd9d4b254a9e32805a56f9b7775baad7ca9c78cb04491" exitCode=0
Feb 24 00:20:15 crc kubenswrapper[4756]: I0224 00:20:15.658241 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjbb7" event={"ID":"89804092-1928-4dd8-bc4e-8f78023b604e","Type":"ContainerDied","Data":"bdd2a0876640701c9facd9d4b254a9e32805a56f9b7775baad7ca9c78cb04491"}
Feb 24 00:20:15 crc kubenswrapper[4756]: I0224 00:20:15.658285 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjbb7" event={"ID":"89804092-1928-4dd8-bc4e-8f78023b604e","Type":"ContainerStarted","Data":"2430bc8dca9bbe9e8e49efa9fe695511de34bd6c17dbd98180f1e02cba38cafb"}
Feb 24 00:20:15 crc kubenswrapper[4756]: I0224 00:20:15.842106 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2262628-7cb6-4042-a0ae-37d0a4ff79b8" path="/var/lib/kubelet/pods/a2262628-7cb6-4042-a0ae-37d0a4ff79b8/volumes"
Feb 24 00:20:16 crc kubenswrapper[4756]: I0224 00:20:16.668405 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjbb7" event={"ID":"89804092-1928-4dd8-bc4e-8f78023b604e","Type":"ContainerStarted","Data":"9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7"}
Feb 24 00:20:17 crc kubenswrapper[4756]: I0224 00:20:17.677880 4756 generic.go:334] "Generic (PLEG): container finished" podID="89804092-1928-4dd8-bc4e-8f78023b604e" containerID="9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7" exitCode=0
Feb 24 00:20:17 crc kubenswrapper[4756]: I0224 00:20:17.677937 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjbb7" event={"ID":"89804092-1928-4dd8-bc4e-8f78023b604e","Type":"ContainerDied","Data":"9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7"}
Feb 24 00:20:17 crc kubenswrapper[4756]: I0224 00:20:17.912308 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-x8qdq_22e3c396-5773-4933-a2fe-7a0250aee650/prometheus-operator/0.log"
Feb 24 00:20:18 crc kubenswrapper[4756]: I0224 00:20:18.063527 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t_f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6/prometheus-operator-admission-webhook/0.log"
Feb 24 00:20:18 crc kubenswrapper[4756]: I0224 00:20:18.107270 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl_28ae67eb-01b5-40ad-8076-9013470234a9/prometheus-operator-admission-webhook/0.log"
Feb 24 00:20:18 crc kubenswrapper[4756]: I0224 00:20:18.242227 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-82zhl_239cf156-509e-41e1-b1ac-f3ebe3fb4067/operator/0.log"
Feb 24 00:20:18 crc kubenswrapper[4756]: I0224 00:20:18.337986 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-cqnkq_96e7fe96-4c58-44fd-b5a2-0fffa0e28e29/perses-operator/0.log"
Feb 24 00:20:18 crc kubenswrapper[4756]: I0224 00:20:18.686245 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjbb7" event={"ID":"89804092-1928-4dd8-bc4e-8f78023b604e","Type":"ContainerStarted","Data":"3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478"}
Feb 24 00:20:18 crc kubenswrapper[4756]: I0224 00:20:18.706118 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qjbb7" podStartSLOduration=2.091235108 podStartE2EDuration="4.706094084s" podCreationTimestamp="2026-02-24 00:20:14 +0000 UTC" firstStartedPulling="2026-02-24 00:20:15.660289672 +0000 UTC m=+872.571152305" lastFinishedPulling="2026-02-24 00:20:18.275148648 +0000 UTC m=+875.186011281" observedRunningTime="2026-02-24 00:20:18.703147602 +0000 UTC m=+875.614010255" watchObservedRunningTime="2026-02-24 00:20:18.706094084 +0000 UTC m=+875.616956727"
Feb 24 00:20:24 crc kubenswrapper[4756]: I0224 00:20:24.539849 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:24 crc kubenswrapper[4756]: I0224 00:20:24.540565 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:25 crc kubenswrapper[4756]: I0224 00:20:25.593169 4756 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qjbb7" podUID="89804092-1928-4dd8-bc4e-8f78023b604e" containerName="registry-server" probeResult="failure" output=<
Feb 24 00:20:25 crc kubenswrapper[4756]: timeout: failed to connect service ":50051" within 1s
Feb 24 00:20:25 crc kubenswrapper[4756]: >
Feb 24 00:20:34 crc kubenswrapper[4756]: I0224 00:20:34.590732 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:34 crc kubenswrapper[4756]: I0224 00:20:34.646719 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:34 crc kubenswrapper[4756]: I0224 00:20:34.839693 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjbb7"]
Feb 24 00:20:35 crc kubenswrapper[4756]: I0224 00:20:35.565788 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65_4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9/util/0.log"
Feb 24 00:20:35 crc kubenswrapper[4756]: I0224 00:20:35.695739 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65_4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9/util/0.log"
Feb 24 00:20:35 crc kubenswrapper[4756]: I0224 00:20:35.767822 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65_4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9/pull/0.log"
Feb 24 00:20:35 crc kubenswrapper[4756]: I0224 00:20:35.792946 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65_4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9/pull/0.log"
Feb 24 00:20:35 crc kubenswrapper[4756]: I0224 00:20:35.801559 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qjbb7" podUID="89804092-1928-4dd8-bc4e-8f78023b604e" containerName="registry-server" containerID="cri-o://3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478" gracePeriod=2
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.024613 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65_4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9/util/0.log"
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.043341 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65_4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9/pull/0.log"
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.064237 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f17vt65_4fd8c967-ae3a-481d-8f42-b1f23ad5c7f9/extract/0.log"
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.237829 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9_b4cfc239-ad75-473e-a27a-67bf9092971d/util/0.log"
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.239576 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjbb7"
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.398750 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-utilities\") pod \"89804092-1928-4dd8-bc4e-8f78023b604e\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") "
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.398803 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlfqx\" (UniqueName: \"kubernetes.io/projected/89804092-1928-4dd8-bc4e-8f78023b604e-kube-api-access-dlfqx\") pod \"89804092-1928-4dd8-bc4e-8f78023b604e\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") "
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.398885 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-catalog-content\") pod \"89804092-1928-4dd8-bc4e-8f78023b604e\" (UID: \"89804092-1928-4dd8-bc4e-8f78023b604e\") "
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.399955 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-utilities" (OuterVolumeSpecName: "utilities") pod "89804092-1928-4dd8-bc4e-8f78023b604e" (UID: "89804092-1928-4dd8-bc4e-8f78023b604e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.406286 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89804092-1928-4dd8-bc4e-8f78023b604e-kube-api-access-dlfqx" (OuterVolumeSpecName: "kube-api-access-dlfqx") pod "89804092-1928-4dd8-bc4e-8f78023b604e" (UID: "89804092-1928-4dd8-bc4e-8f78023b604e"). InnerVolumeSpecName "kube-api-access-dlfqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.500428 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-utilities\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.500464 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlfqx\" (UniqueName: \"kubernetes.io/projected/89804092-1928-4dd8-bc4e-8f78023b604e-kube-api-access-dlfqx\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.513374 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9_b4cfc239-ad75-473e-a27a-67bf9092971d/pull/0.log"
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.524221 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89804092-1928-4dd8-bc4e-8f78023b604e" (UID: "89804092-1928-4dd8-bc4e-8f78023b604e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.571212 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9_b4cfc239-ad75-473e-a27a-67bf9092971d/util/0.log"
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.601867 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89804092-1928-4dd8-bc4e-8f78023b604e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.602888 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9_b4cfc239-ad75-473e-a27a-67bf9092971d/pull/0.log"
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.783455 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9_b4cfc239-ad75-473e-a27a-67bf9092971d/util/0.log"
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.802036 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9_b4cfc239-ad75-473e-a27a-67bf9092971d/pull/0.log"
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.810479 4756 generic.go:334] "Generic (PLEG): container finished" podID="89804092-1928-4dd8-bc4e-8f78023b604e" containerID="3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478" exitCode=0
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.810549 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjbb7" event={"ID":"89804092-1928-4dd8-bc4e-8f78023b604e","Type":"ContainerDied","Data":"3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478"}
Feb 24 00:20:36 crc kubenswrapper[4756]: I0224
00:20:36.810603 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qjbb7" event={"ID":"89804092-1928-4dd8-bc4e-8f78023b604e","Type":"ContainerDied","Data":"2430bc8dca9bbe9e8e49efa9fe695511de34bd6c17dbd98180f1e02cba38cafb"} Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.810639 4756 scope.go:117] "RemoveContainer" containerID="3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478" Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.810853 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qjbb7" Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.832491 4756 scope.go:117] "RemoveContainer" containerID="9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7" Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.839657 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fc2rl9_b4cfc239-ad75-473e-a27a-67bf9092971d/extract/0.log" Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.856345 4756 scope.go:117] "RemoveContainer" containerID="bdd2a0876640701c9facd9d4b254a9e32805a56f9b7775baad7ca9c78cb04491" Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.859022 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qjbb7"] Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.870307 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qjbb7"] Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.880405 4756 scope.go:117] "RemoveContainer" containerID="3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478" Feb 24 00:20:36 crc kubenswrapper[4756]: E0224 00:20:36.881734 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478\": container with ID starting with 3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478 not found: ID does not exist" containerID="3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478" Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.881799 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478"} err="failed to get container status \"3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478\": rpc error: code = NotFound desc = could not find container \"3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478\": container with ID starting with 3eaf0ba9181ec9d7ecbf56db0fc3d6eba6eb8a875a33bad3f1d607ff870cf478 not found: ID does not exist" Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.881839 4756 scope.go:117] "RemoveContainer" containerID="9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7" Feb 24 00:20:36 crc kubenswrapper[4756]: E0224 00:20:36.882578 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7\": container with ID starting with 9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7 not found: ID does not exist" containerID="9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7" Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.882667 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7"} err="failed to get container status \"9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7\": rpc error: code = NotFound desc = could not find container \"9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7\": container with ID 
starting with 9ff91b2df2a8f280d00f5991a0dda0ae5af2dbf951ae6709423ba780c04ebcd7 not found: ID does not exist" Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.882710 4756 scope.go:117] "RemoveContainer" containerID="bdd2a0876640701c9facd9d4b254a9e32805a56f9b7775baad7ca9c78cb04491" Feb 24 00:20:36 crc kubenswrapper[4756]: E0224 00:20:36.883084 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdd2a0876640701c9facd9d4b254a9e32805a56f9b7775baad7ca9c78cb04491\": container with ID starting with bdd2a0876640701c9facd9d4b254a9e32805a56f9b7775baad7ca9c78cb04491 not found: ID does not exist" containerID="bdd2a0876640701c9facd9d4b254a9e32805a56f9b7775baad7ca9c78cb04491" Feb 24 00:20:36 crc kubenswrapper[4756]: I0224 00:20:36.883129 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdd2a0876640701c9facd9d4b254a9e32805a56f9b7775baad7ca9c78cb04491"} err="failed to get container status \"bdd2a0876640701c9facd9d4b254a9e32805a56f9b7775baad7ca9c78cb04491\": rpc error: code = NotFound desc = could not find container \"bdd2a0876640701c9facd9d4b254a9e32805a56f9b7775baad7ca9c78cb04491\": container with ID starting with bdd2a0876640701c9facd9d4b254a9e32805a56f9b7775baad7ca9c78cb04491 not found: ID does not exist" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.201246 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr_067fb988-cad9-439c-b782-2d988453e44a/util/0.log" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.389610 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr_067fb988-cad9-439c-b782-2d988453e44a/util/0.log" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.414974 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr_067fb988-cad9-439c-b782-2d988453e44a/pull/0.log" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.434967 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr_067fb988-cad9-439c-b782-2d988453e44a/pull/0.log" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.625759 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr_067fb988-cad9-439c-b782-2d988453e44a/util/0.log" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.677758 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr_067fb988-cad9-439c-b782-2d988453e44a/extract/0.log" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.683637 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5wd7gr_067fb988-cad9-439c-b782-2d988453e44a/pull/0.log" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.833526 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw_315518ca-e2d6-4701-9e8a-6792f2e4df31/util/0.log" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.866059 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89804092-1928-4dd8-bc4e-8f78023b604e" path="/var/lib/kubelet/pods/89804092-1928-4dd8-bc4e-8f78023b604e/volumes" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.869241 4756 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m2m5w"] Feb 24 00:20:37 crc kubenswrapper[4756]: E0224 00:20:37.871033 4756 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="89804092-1928-4dd8-bc4e-8f78023b604e" containerName="extract-utilities" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.871055 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="89804092-1928-4dd8-bc4e-8f78023b604e" containerName="extract-utilities" Feb 24 00:20:37 crc kubenswrapper[4756]: E0224 00:20:37.871292 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89804092-1928-4dd8-bc4e-8f78023b604e" containerName="registry-server" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.871311 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="89804092-1928-4dd8-bc4e-8f78023b604e" containerName="registry-server" Feb 24 00:20:37 crc kubenswrapper[4756]: E0224 00:20:37.871327 4756 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89804092-1928-4dd8-bc4e-8f78023b604e" containerName="extract-content" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.871335 4756 state_mem.go:107] "Deleted CPUSet assignment" podUID="89804092-1928-4dd8-bc4e-8f78023b604e" containerName="extract-content" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.871758 4756 memory_manager.go:354] "RemoveStaleState removing state" podUID="89804092-1928-4dd8-bc4e-8f78023b604e" containerName="registry-server" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.873735 4756 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:37 crc kubenswrapper[4756]: I0224 00:20:37.886128 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m2m5w"] Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.020021 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-catalog-content\") pod \"certified-operators-m2m5w\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.020132 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txvgk\" (UniqueName: \"kubernetes.io/projected/81bf450b-bb51-401e-8c42-4749975fa318-kube-api-access-txvgk\") pod \"certified-operators-m2m5w\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.020189 4756 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-utilities\") pod \"certified-operators-m2m5w\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.104616 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw_315518ca-e2d6-4701-9e8a-6792f2e4df31/pull/0.log" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.121757 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txvgk\" (UniqueName: 
\"kubernetes.io/projected/81bf450b-bb51-401e-8c42-4749975fa318-kube-api-access-txvgk\") pod \"certified-operators-m2m5w\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.121844 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-utilities\") pod \"certified-operators-m2m5w\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.121925 4756 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-catalog-content\") pod \"certified-operators-m2m5w\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.122582 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-catalog-content\") pod \"certified-operators-m2m5w\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.122737 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-utilities\") pod \"certified-operators-m2m5w\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.132760 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw_315518ca-e2d6-4701-9e8a-6792f2e4df31/util/0.log" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.149462 4756 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txvgk\" (UniqueName: \"kubernetes.io/projected/81bf450b-bb51-401e-8c42-4749975fa318-kube-api-access-txvgk\") pod \"certified-operators-m2m5w\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.173177 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw_315518ca-e2d6-4701-9e8a-6792f2e4df31/pull/0.log" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.205666 4756 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.633511 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw_315518ca-e2d6-4701-9e8a-6792f2e4df31/extract/0.log" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.658757 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw_315518ca-e2d6-4701-9e8a-6792f2e4df31/util/0.log" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.709930 4756 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m2m5w"] Feb 24 00:20:38 crc kubenswrapper[4756]: W0224 00:20:38.712962 4756 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81bf450b_bb51_401e_8c42_4749975fa318.slice/crio-6baa65de1eabbf4f3eaef42b04e8a5d21c6e636a51ea6b8b2d4b0c4005410b1c WatchSource:0}: Error finding container 6baa65de1eabbf4f3eaef42b04e8a5d21c6e636a51ea6b8b2d4b0c4005410b1c: Status 404 returned error can't find the container with id 6baa65de1eabbf4f3eaef42b04e8a5d21c6e636a51ea6b8b2d4b0c4005410b1c Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.766174 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08rgcvw_315518ca-e2d6-4701-9e8a-6792f2e4df31/pull/0.log" Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.825648 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2m5w" event={"ID":"81bf450b-bb51-401e-8c42-4749975fa318","Type":"ContainerStarted","Data":"6baa65de1eabbf4f3eaef42b04e8a5d21c6e636a51ea6b8b2d4b0c4005410b1c"} Feb 24 00:20:38 crc kubenswrapper[4756]: I0224 00:20:38.974771 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s6swn_6c6e373e-56f6-40e1-94b4-d9c4116b0f9f/extract-utilities/0.log" Feb 24 00:20:39 crc kubenswrapper[4756]: I0224 00:20:39.195076 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s6swn_6c6e373e-56f6-40e1-94b4-d9c4116b0f9f/extract-utilities/0.log" Feb 24 00:20:39 crc kubenswrapper[4756]: I0224 00:20:39.251583 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s6swn_6c6e373e-56f6-40e1-94b4-d9c4116b0f9f/extract-content/0.log" Feb 24 00:20:39 crc kubenswrapper[4756]: I0224 00:20:39.267433 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s6swn_6c6e373e-56f6-40e1-94b4-d9c4116b0f9f/extract-content/0.log" Feb 24 00:20:39 crc kubenswrapper[4756]: I0224 
00:20:39.400642 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s6swn_6c6e373e-56f6-40e1-94b4-d9c4116b0f9f/extract-content/0.log" Feb 24 00:20:39 crc kubenswrapper[4756]: I0224 00:20:39.416436 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s6swn_6c6e373e-56f6-40e1-94b4-d9c4116b0f9f/extract-utilities/0.log" Feb 24 00:20:39 crc kubenswrapper[4756]: I0224 00:20:39.465539 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s6swn_6c6e373e-56f6-40e1-94b4-d9c4116b0f9f/registry-server/0.log" Feb 24 00:20:39 crc kubenswrapper[4756]: I0224 00:20:39.775526 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjbbr_b1c23d9c-138c-4f2d-8e1b-10bf199f3c65/extract-utilities/0.log" Feb 24 00:20:39 crc kubenswrapper[4756]: I0224 00:20:39.834841 4756 generic.go:334] "Generic (PLEG): container finished" podID="81bf450b-bb51-401e-8c42-4749975fa318" containerID="09f22c19797bfdc54087eb091c33228e764e683abf57592baac026d47037657c" exitCode=0 Feb 24 00:20:39 crc kubenswrapper[4756]: I0224 00:20:39.844257 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2m5w" event={"ID":"81bf450b-bb51-401e-8c42-4749975fa318","Type":"ContainerDied","Data":"09f22c19797bfdc54087eb091c33228e764e683abf57592baac026d47037657c"} Feb 24 00:20:39 crc kubenswrapper[4756]: I0224 00:20:39.983383 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjbbr_b1c23d9c-138c-4f2d-8e1b-10bf199f3c65/extract-content/0.log" Feb 24 00:20:39 crc kubenswrapper[4756]: I0224 00:20:39.987370 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjbbr_b1c23d9c-138c-4f2d-8e1b-10bf199f3c65/extract-utilities/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 
00:20:40.021373 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjbbr_b1c23d9c-138c-4f2d-8e1b-10bf199f3c65/extract-content/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.176591 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjbbr_b1c23d9c-138c-4f2d-8e1b-10bf199f3c65/extract-utilities/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.191850 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjbbr_b1c23d9c-138c-4f2d-8e1b-10bf199f3c65/extract-content/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.263573 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tjbbr_b1c23d9c-138c-4f2d-8e1b-10bf199f3c65/registry-server/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.274536 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z6g4k_4a552cc6-869f-4b5c-a95a-25892b560fa4/extract-utilities/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.473082 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z6g4k_4a552cc6-869f-4b5c-a95a-25892b560fa4/extract-content/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.489982 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z6g4k_4a552cc6-869f-4b5c-a95a-25892b560fa4/extract-content/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.524000 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z6g4k_4a552cc6-869f-4b5c-a95a-25892b560fa4/extract-utilities/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.766521 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-z6g4k_4a552cc6-869f-4b5c-a95a-25892b560fa4/extract-utilities/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.799384 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z6g4k_4a552cc6-869f-4b5c-a95a-25892b560fa4/extract-content/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.842951 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2m5w" event={"ID":"81bf450b-bb51-401e-8c42-4749975fa318","Type":"ContainerStarted","Data":"e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0"} Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.855008 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-44tfc_31e6e2f7-8e2a-4013-89c2-f6520a649a87/extract-utilities/0.log" Feb 24 00:20:40 crc kubenswrapper[4756]: I0224 00:20:40.963994 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-z6g4k_4a552cc6-869f-4b5c-a95a-25892b560fa4/registry-server/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.056369 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-44tfc_31e6e2f7-8e2a-4013-89c2-f6520a649a87/extract-content/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.076867 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-44tfc_31e6e2f7-8e2a-4013-89c2-f6520a649a87/extract-utilities/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.128500 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-44tfc_31e6e2f7-8e2a-4013-89c2-f6520a649a87/extract-content/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.334218 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-44tfc_31e6e2f7-8e2a-4013-89c2-f6520a649a87/extract-utilities/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.349300 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq6rf_167c0e0e-ba56-4452-aed6-fd2857f9f3c7/extract-utilities/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.400878 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-44tfc_31e6e2f7-8e2a-4013-89c2-f6520a649a87/extract-content/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.513784 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-44tfc_31e6e2f7-8e2a-4013-89c2-f6520a649a87/registry-server/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.629782 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq6rf_167c0e0e-ba56-4452-aed6-fd2857f9f3c7/extract-utilities/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.646668 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq6rf_167c0e0e-ba56-4452-aed6-fd2857f9f3c7/extract-content/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.664077 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq6rf_167c0e0e-ba56-4452-aed6-fd2857f9f3c7/extract-content/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.853618 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq6rf_167c0e0e-ba56-4452-aed6-fd2857f9f3c7/extract-utilities/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.861692 4756 generic.go:334] "Generic (PLEG): container finished" podID="81bf450b-bb51-401e-8c42-4749975fa318" 
containerID="e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0" exitCode=0 Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.876361 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2m5w" event={"ID":"81bf450b-bb51-401e-8c42-4749975fa318","Type":"ContainerDied","Data":"e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0"} Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.927841 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq6rf_167c0e0e-ba56-4452-aed6-fd2857f9f3c7/registry-server/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.936191 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fq6rf_167c0e0e-ba56-4452-aed6-fd2857f9f3c7/extract-content/0.log" Feb 24 00:20:41 crc kubenswrapper[4756]: I0224 00:20:41.963985 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2gzm_23d0e11e-04f3-4913-9b33-be7aeb27232b/extract-utilities/0.log" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.206239 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2gzm_23d0e11e-04f3-4913-9b33-be7aeb27232b/extract-content/0.log" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.299357 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2gzm_23d0e11e-04f3-4913-9b33-be7aeb27232b/extract-utilities/0.log" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.308822 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2gzm_23d0e11e-04f3-4913-9b33-be7aeb27232b/extract-content/0.log" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.511499 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-x2gzm_23d0e11e-04f3-4913-9b33-be7aeb27232b/extract-content/0.log" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.546986 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2gzm_23d0e11e-04f3-4913-9b33-be7aeb27232b/extract-utilities/0.log" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.548833 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2gzm_23d0e11e-04f3-4913-9b33-be7aeb27232b/registry-server/0.log" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.562203 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r5ztj_69b6689d-ae32-4f32-a088-588b657e42ce/marketplace-operator/3.log" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.748611 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ddpsx_bf07722d-ecdf-4f68-8074-fac31ce286a5/extract-utilities/0.log" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.767009 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-r5ztj_69b6689d-ae32-4f32-a088-588b657e42ce/marketplace-operator/2.log" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.872335 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2m5w" event={"ID":"81bf450b-bb51-401e-8c42-4749975fa318","Type":"ContainerStarted","Data":"395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039"} Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.892451 4756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m2m5w" podStartSLOduration=3.502108528 podStartE2EDuration="5.892434066s" podCreationTimestamp="2026-02-24 00:20:37 +0000 UTC" 
firstStartedPulling="2026-02-24 00:20:39.837050726 +0000 UTC m=+896.747913359" lastFinishedPulling="2026-02-24 00:20:42.227376264 +0000 UTC m=+899.138238897" observedRunningTime="2026-02-24 00:20:42.890982616 +0000 UTC m=+899.801845259" watchObservedRunningTime="2026-02-24 00:20:42.892434066 +0000 UTC m=+899.803296689" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.987773 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ddpsx_bf07722d-ecdf-4f68-8074-fac31ce286a5/extract-content/0.log" Feb 24 00:20:42 crc kubenswrapper[4756]: I0224 00:20:42.993000 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ddpsx_bf07722d-ecdf-4f68-8074-fac31ce286a5/extract-content/0.log" Feb 24 00:20:43 crc kubenswrapper[4756]: I0224 00:20:43.020505 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ddpsx_bf07722d-ecdf-4f68-8074-fac31ce286a5/extract-utilities/0.log" Feb 24 00:20:43 crc kubenswrapper[4756]: I0224 00:20:43.234192 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ddpsx_bf07722d-ecdf-4f68-8074-fac31ce286a5/extract-content/0.log" Feb 24 00:20:43 crc kubenswrapper[4756]: I0224 00:20:43.313674 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ddpsx_bf07722d-ecdf-4f68-8074-fac31ce286a5/registry-server/0.log" Feb 24 00:20:43 crc kubenswrapper[4756]: I0224 00:20:43.330290 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-ddpsx_bf07722d-ecdf-4f68-8074-fac31ce286a5/extract-utilities/0.log" Feb 24 00:20:43 crc kubenswrapper[4756]: I0224 00:20:43.739357 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zpsd2_1bd46f43-d695-4e45-9396-78c6f5f64a89/extract-utilities/0.log" Feb 24 00:20:44 crc kubenswrapper[4756]: I0224 
00:20:44.035160 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zpsd2_1bd46f43-d695-4e45-9396-78c6f5f64a89/extract-utilities/0.log" Feb 24 00:20:44 crc kubenswrapper[4756]: I0224 00:20:44.041715 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zpsd2_1bd46f43-d695-4e45-9396-78c6f5f64a89/extract-content/0.log" Feb 24 00:20:44 crc kubenswrapper[4756]: I0224 00:20:44.105397 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zpsd2_1bd46f43-d695-4e45-9396-78c6f5f64a89/extract-content/0.log" Feb 24 00:20:44 crc kubenswrapper[4756]: I0224 00:20:44.259934 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zpsd2_1bd46f43-d695-4e45-9396-78c6f5f64a89/extract-content/0.log" Feb 24 00:20:44 crc kubenswrapper[4756]: I0224 00:20:44.313484 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zpsd2_1bd46f43-d695-4e45-9396-78c6f5f64a89/extract-utilities/0.log" Feb 24 00:20:44 crc kubenswrapper[4756]: I0224 00:20:44.466508 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zpsd2_1bd46f43-d695-4e45-9396-78c6f5f64a89/registry-server/0.log" Feb 24 00:20:48 crc kubenswrapper[4756]: I0224 00:20:48.205895 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:48 crc kubenswrapper[4756]: I0224 00:20:48.206393 4756 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:48 crc kubenswrapper[4756]: I0224 00:20:48.251548 4756 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:48 crc kubenswrapper[4756]: I0224 00:20:48.951645 4756 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:49 crc kubenswrapper[4756]: I0224 00:20:49.001560 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m2m5w"] Feb 24 00:20:50 crc kubenswrapper[4756]: I0224 00:20:50.924460 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m2m5w" podUID="81bf450b-bb51-401e-8c42-4749975fa318" containerName="registry-server" containerID="cri-o://395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039" gracePeriod=2 Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.344306 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.430246 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-utilities\") pod \"81bf450b-bb51-401e-8c42-4749975fa318\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.430435 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-catalog-content\") pod \"81bf450b-bb51-401e-8c42-4749975fa318\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.431266 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-utilities" (OuterVolumeSpecName: "utilities") pod "81bf450b-bb51-401e-8c42-4749975fa318" (UID: "81bf450b-bb51-401e-8c42-4749975fa318"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.437383 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txvgk\" (UniqueName: \"kubernetes.io/projected/81bf450b-bb51-401e-8c42-4749975fa318-kube-api-access-txvgk\") pod \"81bf450b-bb51-401e-8c42-4749975fa318\" (UID: \"81bf450b-bb51-401e-8c42-4749975fa318\") " Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.437845 4756 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-utilities\") on node \"crc\" DevicePath \"\"" Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.444490 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81bf450b-bb51-401e-8c42-4749975fa318-kube-api-access-txvgk" (OuterVolumeSpecName: "kube-api-access-txvgk") pod "81bf450b-bb51-401e-8c42-4749975fa318" (UID: "81bf450b-bb51-401e-8c42-4749975fa318"). InnerVolumeSpecName "kube-api-access-txvgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.487317 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81bf450b-bb51-401e-8c42-4749975fa318" (UID: "81bf450b-bb51-401e-8c42-4749975fa318"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.539869 4756 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81bf450b-bb51-401e-8c42-4749975fa318-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.539914 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txvgk\" (UniqueName: \"kubernetes.io/projected/81bf450b-bb51-401e-8c42-4749975fa318-kube-api-access-txvgk\") on node \"crc\" DevicePath \"\"" Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.933153 4756 generic.go:334] "Generic (PLEG): container finished" podID="81bf450b-bb51-401e-8c42-4749975fa318" containerID="395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039" exitCode=0 Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.933217 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2m5w" event={"ID":"81bf450b-bb51-401e-8c42-4749975fa318","Type":"ContainerDied","Data":"395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039"} Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.933257 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m2m5w" event={"ID":"81bf450b-bb51-401e-8c42-4749975fa318","Type":"ContainerDied","Data":"6baa65de1eabbf4f3eaef42b04e8a5d21c6e636a51ea6b8b2d4b0c4005410b1c"} Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.933283 4756 scope.go:117] "RemoveContainer" containerID="395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039" Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.933640 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m2m5w" Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.956565 4756 scope.go:117] "RemoveContainer" containerID="e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0" Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.963668 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m2m5w"] Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.970523 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m2m5w"] Feb 24 00:20:51 crc kubenswrapper[4756]: I0224 00:20:51.980907 4756 scope.go:117] "RemoveContainer" containerID="09f22c19797bfdc54087eb091c33228e764e683abf57592baac026d47037657c" Feb 24 00:20:52 crc kubenswrapper[4756]: I0224 00:20:52.002641 4756 scope.go:117] "RemoveContainer" containerID="395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039" Feb 24 00:20:52 crc kubenswrapper[4756]: E0224 00:20:52.003607 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039\": container with ID starting with 395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039 not found: ID does not exist" containerID="395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039" Feb 24 00:20:52 crc kubenswrapper[4756]: I0224 00:20:52.003655 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039"} err="failed to get container status \"395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039\": rpc error: code = NotFound desc = could not find container \"395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039\": container with ID starting with 395b9a4080c373300ffb8ed048a5c5a56e87bbf1233768a8dbf6087d2a519039 not 
found: ID does not exist" Feb 24 00:20:52 crc kubenswrapper[4756]: I0224 00:20:52.003685 4756 scope.go:117] "RemoveContainer" containerID="e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0" Feb 24 00:20:52 crc kubenswrapper[4756]: E0224 00:20:52.004749 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0\": container with ID starting with e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0 not found: ID does not exist" containerID="e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0" Feb 24 00:20:52 crc kubenswrapper[4756]: I0224 00:20:52.004788 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0"} err="failed to get container status \"e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0\": rpc error: code = NotFound desc = could not find container \"e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0\": container with ID starting with e915528c26e48a4ab6d1ea1c2a20a002950e176d23ff8b57e47ad233141bc9d0 not found: ID does not exist" Feb 24 00:20:52 crc kubenswrapper[4756]: I0224 00:20:52.004819 4756 scope.go:117] "RemoveContainer" containerID="09f22c19797bfdc54087eb091c33228e764e683abf57592baac026d47037657c" Feb 24 00:20:52 crc kubenswrapper[4756]: E0224 00:20:52.006499 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09f22c19797bfdc54087eb091c33228e764e683abf57592baac026d47037657c\": container with ID starting with 09f22c19797bfdc54087eb091c33228e764e683abf57592baac026d47037657c not found: ID does not exist" containerID="09f22c19797bfdc54087eb091c33228e764e683abf57592baac026d47037657c" Feb 24 00:20:52 crc kubenswrapper[4756]: I0224 00:20:52.006533 4756 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09f22c19797bfdc54087eb091c33228e764e683abf57592baac026d47037657c"} err="failed to get container status \"09f22c19797bfdc54087eb091c33228e764e683abf57592baac026d47037657c\": rpc error: code = NotFound desc = could not find container \"09f22c19797bfdc54087eb091c33228e764e683abf57592baac026d47037657c\": container with ID starting with 09f22c19797bfdc54087eb091c33228e764e683abf57592baac026d47037657c not found: ID does not exist" Feb 24 00:20:53 crc kubenswrapper[4756]: I0224 00:20:53.841181 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81bf450b-bb51-401e-8c42-4749975fa318" path="/var/lib/kubelet/pods/81bf450b-bb51-401e-8c42-4749975fa318/volumes" Feb 24 00:20:57 crc kubenswrapper[4756]: I0224 00:20:57.125781 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-x8qdq_22e3c396-5773-4933-a2fe-7a0250aee650/prometheus-operator/0.log" Feb 24 00:20:57 crc kubenswrapper[4756]: I0224 00:20:57.161775 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cdf7cd6b6-pbn7t_f73c57e2-cf3f-4e43-9bda-936b9fbd3ae6/prometheus-operator-admission-webhook/0.log" Feb 24 00:20:57 crc kubenswrapper[4756]: I0224 00:20:57.179798 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cdf7cd6b6-tllsl_28ae67eb-01b5-40ad-8076-9013470234a9/prometheus-operator-admission-webhook/0.log" Feb 24 00:20:57 crc kubenswrapper[4756]: I0224 00:20:57.311282 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-82zhl_239cf156-509e-41e1-b1ac-f3ebe3fb4067/operator/0.log" Feb 24 00:20:57 crc kubenswrapper[4756]: I0224 00:20:57.333689 4756 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-cqnkq_96e7fe96-4c58-44fd-b5a2-0fffa0e28e29/perses-operator/0.log" Feb 24 00:21:51 crc kubenswrapper[4756]: I0224 00:21:51.444115 4756 generic.go:334] "Generic (PLEG): container finished" podID="f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6" containerID="26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850" exitCode=0 Feb 24 00:21:51 crc kubenswrapper[4756]: I0224 00:21:51.444239 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tmxcd/must-gather-zk8st" event={"ID":"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6","Type":"ContainerDied","Data":"26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850"} Feb 24 00:21:51 crc kubenswrapper[4756]: I0224 00:21:51.445496 4756 scope.go:117] "RemoveContainer" containerID="26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850" Feb 24 00:21:52 crc kubenswrapper[4756]: I0224 00:21:52.311044 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tmxcd_must-gather-zk8st_f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6/gather/0.log" Feb 24 00:21:52 crc kubenswrapper[4756]: I0224 00:21:52.710923 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:21:52 crc kubenswrapper[4756]: I0224 00:21:52.711035 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.090437 4756 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-must-gather-tmxcd/must-gather-zk8st"] Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.091451 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tmxcd/must-gather-zk8st" podUID="f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6" containerName="copy" containerID="cri-o://3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8" gracePeriod=2 Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.094901 4756 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tmxcd/must-gather-zk8st"] Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.501763 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tmxcd_must-gather-zk8st_f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6/copy/0.log" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.502726 4756 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tmxcd/must-gather-zk8st" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.516958 4756 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tmxcd_must-gather-zk8st_f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6/copy/0.log" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.517631 4756 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tmxcd/must-gather-zk8st" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.517742 4756 scope.go:117] "RemoveContainer" containerID="3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.518201 4756 generic.go:334] "Generic (PLEG): container finished" podID="f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6" containerID="3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8" exitCode=143 Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.539387 4756 scope.go:117] "RemoveContainer" containerID="26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.552679 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-must-gather-output\") pod \"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6\" (UID: \"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6\") " Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.552771 4756 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph7lk\" (UniqueName: \"kubernetes.io/projected/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-kube-api-access-ph7lk\") pod \"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6\" (UID: \"f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6\") " Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.562051 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-kube-api-access-ph7lk" (OuterVolumeSpecName: "kube-api-access-ph7lk") pod "f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6" (UID: "f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6"). InnerVolumeSpecName "kube-api-access-ph7lk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.601893 4756 scope.go:117] "RemoveContainer" containerID="3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8" Feb 24 00:21:59 crc kubenswrapper[4756]: E0224 00:21:59.602438 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8\": container with ID starting with 3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8 not found: ID does not exist" containerID="3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.602477 4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8"} err="failed to get container status \"3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8\": rpc error: code = NotFound desc = could not find container \"3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8\": container with ID starting with 3a3cb71016ca6f953abb3814764b01b3f6d7329b5acb0ca814ba23b5de9658a8 not found: ID does not exist" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.602504 4756 scope.go:117] "RemoveContainer" containerID="26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850" Feb 24 00:21:59 crc kubenswrapper[4756]: E0224 00:21:59.602747 4756 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850\": container with ID starting with 26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850 not found: ID does not exist" containerID="26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.602780 
4756 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850"} err="failed to get container status \"26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850\": rpc error: code = NotFound desc = could not find container \"26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850\": container with ID starting with 26338d6aff0e08a838be5fcb77782d71729c05cf62ce275abd70890fd18ec850 not found: ID does not exist" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.636169 4756 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6" (UID: "f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.660819 4756 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.660865 4756 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph7lk\" (UniqueName: \"kubernetes.io/projected/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6-kube-api-access-ph7lk\") on node \"crc\" DevicePath \"\"" Feb 24 00:21:59 crc kubenswrapper[4756]: I0224 00:21:59.844355 4756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6" path="/var/lib/kubelet/pods/f9fbe9ba-5532-46c3-9bd4-ce6634d1c1a6/volumes" Feb 24 00:22:22 crc kubenswrapper[4756]: I0224 00:22:22.710688 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:22:22 crc kubenswrapper[4756]: I0224 00:22:22.712357 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:22:52 crc kubenswrapper[4756]: I0224 00:22:52.711856 4756 patch_prober.go:28] interesting pod/machine-config-daemon-qb88h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 24 00:22:52 crc kubenswrapper[4756]: I0224 00:22:52.712751 4756 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 24 00:22:52 crc kubenswrapper[4756]: I0224 00:22:52.712827 4756 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" Feb 24 00:22:52 crc kubenswrapper[4756]: I0224 00:22:52.713874 4756 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc2ac906d4539078c314833b1bd446386b29d29b18184d97ac48c21e47011591"} pod="openshift-machine-config-operator/machine-config-daemon-qb88h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 24 00:22:52 crc 
kubenswrapper[4756]: I0224 00:22:52.713973 4756 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" podUID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerName="machine-config-daemon" containerID="cri-o://fc2ac906d4539078c314833b1bd446386b29d29b18184d97ac48c21e47011591" gracePeriod=600 Feb 24 00:22:52 crc kubenswrapper[4756]: I0224 00:22:52.972213 4756 generic.go:334] "Generic (PLEG): container finished" podID="071714b1-b44e-4085-adf5-0ed6b6e64af3" containerID="fc2ac906d4539078c314833b1bd446386b29d29b18184d97ac48c21e47011591" exitCode=0 Feb 24 00:22:52 crc kubenswrapper[4756]: I0224 00:22:52.972277 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerDied","Data":"fc2ac906d4539078c314833b1bd446386b29d29b18184d97ac48c21e47011591"} Feb 24 00:22:52 crc kubenswrapper[4756]: I0224 00:22:52.972325 4756 scope.go:117] "RemoveContainer" containerID="1d01c4008be0b2356d9dc9b4c088f54b7dea3397664e049fd73bc626db7cec60" Feb 24 00:22:53 crc kubenswrapper[4756]: I0224 00:22:53.985206 4756 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qb88h" event={"ID":"071714b1-b44e-4085-adf5-0ed6b6e64af3","Type":"ContainerStarted","Data":"3030e2196d3d1df98d91b5b1a3ea759bc428a8beb29589805f90646ad3374e9c"}